AI auditing: The Broken Bus on the Road to AI Accountability (2401.14462v1)

Published 25 Jan 2024 in cs.CY

Abstract: One of the most concrete measures toward meaningful AI accountability is to consequentially assess and report systems' performance and impact. However, the practical nature of the "AI audit" ecosystem is muddled and imprecise, making it difficult to work through the various concepts and map out the stakeholders involved in the practice. First, we taxonomize current AI audit practices as conducted by regulators, law firms, civil society, journalism, academia, and consulting agencies. Next, we assess the impact of audits done by stakeholders within each domain. We find that only a subset of AI audit studies translate into desired accountability outcomes. We thus assess and isolate the practices necessary for effective AI audit results, articulating how AI audit design, methodology, and institutional context shape an audit's effectiveness as a meaningful mechanism for accountability.

Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. 
Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   
Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. 
D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. 
National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. 
Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. 
National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 
90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. 
Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. 
Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. 
[Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. 
Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. 
Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 
1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. 
Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. 
[Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. 
Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. 
  116. K. Hao and D. Seetharaman, “Cleaning Up ChatGPT Takes Heavy Toll on Human Workers,” Wall Street Journal, Jul. 2023. [Online]. Available: https://www.wsj.com/articles/chatgpt-openai-content-abusive-sexually-explicit-harassment-kenya-workers-on-human-workers-cf191483 Hill [2020] K. Hill, “Wrongfully Accused by an Algorithm,” The New York Times, Jun. 2020. [Online]. Available: https://www.nytimes.com/2020/06/24/technology/facial-recognition-arrest.html Hao and Swart [2022] K. Hao and H. Swart, “South Africa’s private surveillance machine is fueling a digital apartheid,” MIT Technology Review, Apr. 2022. [Online]. Available: https://www.technologyreview.com/2022/04/19/1049996/south-africa-ai-surveillance-digital-apartheid/ Hao and Hernández [2022] K. Hao and A. P. Hernández, “How the AI industry profits from catastrophe,” MIT Technology Review, Apr. 2022. [Online]. Available: https://www.technologyreview.com/2022/04/20/1050392/ai-industry-appen-scale-data-labels/ Heikkilä [2022b] M. Heikkilä, “The viral AI avatar app Lensa undressed me—without my consent,” MIT Technology Review, Dec. 2022. [Online]. Available: https://www.technologyreview.com/2022/12/12/1064751/the-viral-ai-avatar-app-lensa-undressed-me-without-my-consent/ ProPublica [2023] ProPublica, “Investigative Journalism and News in the Public Interest,” Oct. 2023. [Online]. Available: https://www.propublica.org/ [128] The Markup, “About Us.” [Online]. Available: https://themarkup.org/about Jeffries and Yin [2021] A. Jeffries and L. Yin, “Amazon Puts Its Own “Brands” First Above Better-Rated Products – The Markup,” The Markup, Oct. 2021. [Online]. Available: https://themarkup.org/amazons-advantage/2021/10/14/amazon-puts-its-own-brands-first-above-better-rated-products Angwin and Parris Jr. [2016] J. Angwin and T. Parris Jr., “Facebook Lets Advertisers Exclude Users by Race,” ProPublica, Oct. 2016. [Online]. Available: https://www.propublica.org/article/facebook-lets-advertisers-exclude-users-by-race Keegan [2021] J. Keegan, “Facebook Got Rid of Racial Ad Categories. Or Did It? – The Markup,” The Markup, Jul. 2021. [Online]. Available: https://themarkup.org/citizen-browser/2021/07/09/facebook-got-rid-of-racial-ad-categories-or-did-it Faife and Ng [2021] C. Faife and A. Ng, “Credit Card Ads Were Targeted by Age, Violating Facebook’s Anti-Discrimination Policy – The Markup,” The Markup, Apr. 2021. [Online]. Available: https://themarkup.org/citizen-browser/2021/04/29/credit-card-ads-were-targeted-by-age-violating-facebooks-anti-discrimination-policy The Markup [2022] The Markup, “Citizen Browser,” Oct. 2022. [Online]. Available: https://themarkup.org/series/citizen-browser Ng and Faife [2021] A. Ng and C. Faife, “Facebook Pledges to Remove Discriminatory Credit and Loan Ads Discovered by The Markup – The Markup,” The Markup, May 2021. [Online]. Available: https://themarkup.org/citizen-browser/2021/05/04/facebook-pledges-to-remove-discriminatory-credit-and-loan-ads-discovered-by-the-markup Yin [2021] L. Yin, “Citing Markup Investigation, Civil Rights Group Demands Racial Equity Audit at Google – The Markup,” The Markup, May 2021. [Online]. Available: https://themarkup.org/google-the-giant/2021/05/04/citing-markup-investigation-civil-rights-group-demands-racial-equity-audit-at-google Carollo [2021] M. Carollo, “The Markup’s Work Cited in Effort to Outlaw Discriminatory Algorithms – The Markup,” The Markup, Dec. 2021. [Online]. 
Available: https://themarkup.org/locked-out/2021/12/17/the-markups-work-cited-in-effort-to-outlaw-discriminatory-algorithms Lecher and Keegan [2021] C. Lecher and J. Keegan, “Nevada Lawmakers Introduce Privacy Legislation After Markup Investigation into Vaccine Websites – The Markup,” The Markup, May 2021. [Online]. Available: https://themarkup.org/blacklight/2021/05/18/nevada-lawmakers-introduce-privacy-legislation-after-markup-investigation-into-vaccine-websites Benner et al. [2019] K. Benner, G. Thrush, and M. Isaac, “Facebook Engages in Housing Discrimination With Its Ad Practices, U.S. Says,” The New York Times, Mar. 2019. [Online]. Available: https://www.nytimes.com/2019/03/28/us/politics/facebook-housing-discrimination.html Feiner [2022] L. Feiner, “DOJ settles lawsuit with Facebook over allegedly discriminatory housing advertising,” CNBC, Jun. 2022. [Online]. Available: https://www.cnbc.com/2022/06/21/doj-settles-with-facebook-over-allegedly-discriminatory-housing-ads.html Carollo [2023] M. Carollo, “An Algorithm Decides Who Gets a Liver Transplant. Here Are 5 Things to Know. – The Markup,” May 2023. [Online]. Available: https://themarkup.org/hello-world/2023/05/20/an-algorithm-decides-who-gets-a-liver-transplant-here-are-5-things-to-know Carollo and Tanen [2023] M. Carollo and B. Tanen, “Poorer States Suffer Under New Organ Donation Rules, As Livers Go to Waste – The Markup,” The Markup, Mar. 2023. [Online]. Available: https://themarkup.org/organ-failure/2023/03/21/poorer-states-suffer-under-new-organ-donation-rules-as-livers-go-to-waste Electronic Frontier Foundation [2007] Electronic Frontier Foundation, “About EFF,” Jul. 2007. [Online]. Available: https://www.eff.org/about Refugee Law Lab [2020] Refugee Law Lab, “Refugee Law Lab,” Jul. 2020. [Online]. Available: https://refugeelab.ca/ [144] The Citizen Lab, “About the Citizen Lab.” [Online]. Available: https://citizenlab.ca/about/ Migration Tech Monitor [2023] Migration Tech Monitor, “Migration and Technology Monitor,” Feb. 2023. [Online]. Available: https://www.migrationtechmonitor.com Ada Lovelace Institute [2023] Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/ [147] American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. 
Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. 
Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. 
[2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ K. Hill, “Wrongfully Accused by an Algorithm,” The New York Times, Jun. 2020. [Online]. Available: https://www.nytimes.com/2020/06/24/technology/facial-recognition-arrest.html Hao and Swart [2022] K. Hao and H. Swart, “South Africa’s private surveillance machine is fueling a digital apartheid,” MIT Technology Review, Apr. 2022. [Online]. Available: https://www.technologyreview.com/2022/04/19/1049996/south-africa-ai-surveillance-digital-apartheid/ Hao and Hernández [2022] K. Hao and A. P. Hernández, “How the AI industry profits from catastrophe,” MIT Technology Review, Apr. 2022. [Online]. Available: https://www.technologyreview.com/2022/04/20/1050392/ai-industry-appen-scale-data-labels/ Heikkilä [2022b] M. Heikkilä, “The viral AI avatar app Lensa undressed me—without my consent,” MIT Technology Review, Dec. 2022. [Online]. Available: https://www.technologyreview.com/2022/12/12/1064751/the-viral-ai-avatar-app-lensa-undressed-me-without-my-consent/ ProPublica [2023] ProPublica, “Investigative Journalism and News in the Public Interest,” Oct. 2023. [Online]. Available: https://www.propublica.org/ [128] The Markup, “About Us.” [Online]. Available: https://themarkup.org/about Jeffries and Yin [2021] A. Jeffries and L. Yin, “Amazon Puts Its Own “Brands” First Above Better-Rated Products – The Markup,” The Markup, Oct. 2021. [Online]. Available: https://themarkup.org/amazons-advantage/2021/10/14/amazon-puts-its-own-brands-first-above-better-rated-products Angwin and Parris Jr. [2016] J. Angwin and T. 
Parris Jr., “Facebook Lets Advertisers Exclude Users by Race,” ProPublica, Oct. 2016. [Online]. Available: https://www.propublica.org/article/facebook-lets-advertisers-exclude-users-by-race Keegan [2021] J. Keegan, “Facebook Got Rid of Racial Ad Categories. Or Did It? – The Markup,” The Markup, Jul. 2021. [Online]. Available: https://themarkup.org/citizen-browser/2021/07/09/facebook-got-rid-of-racial-ad-categories-or-did-it Faife and Ng [2021] C. Faife and A. Ng, “Credit Card Ads Were Targeted by Age, Violating Facebook’s Anti-Discrimination Policy – The Markup,” The Markup, Apr. 2021. [Online]. Available: https://themarkup.org/citizen-browser/2021/04/29/credit-card-ads-were-targeted-by-age-violating-facebooks-anti-discrimination-policy The Markup [2022] The Markup, “Citizen Browser,” Oct. 2022. [Online]. Available: https://themarkup.org/series/citizen-browser Ng and Faife [2021] A. Ng and C. Faife, “Facebook Pledges to Remove Discriminatory Credit and Loan Ads Discovered by The Markup – The Markup,” The Markup, May 2021. [Online]. Available: https://themarkup.org/citizen-browser/2021/05/04/facebook-pledges-to-remove-discriminatory-credit-and-loan-ads-discovered-by-the-markup Yin [2021] L. Yin, “Citing Markup Investigation, Civil Rights Group Demands Racial Equity Audit at Google – The Markup,” The Markup, May 2021. [Online]. Available: https://themarkup.org/google-the-giant/2021/05/04/citing-markup-investigation-civil-rights-group-demands-racial-equity-audit-at-google Carollo [2021] M. Carollo, “The Markup’s Work Cited in Effort to Outlaw Discriminatory Algorithms – The Markup,” The Markup, Dec. 2021. [Online]. Available: https://themarkup.org/locked-out/2021/12/17/the-markups-work-cited-in-effort-to-outlaw-discriminatory-algorithms Lecher and Keegan [2021] C. Lecher and J. Keegan, “Nevada Lawmakers Introduce Privacy Legislation After Markup Investigation into Vaccine Websites – The Markup,” The Markup, May 2021. [Online]. Available: https://themarkup.org/blacklight/2021/05/18/nevada-lawmakers-introduce-privacy-legislation-after-markup-investigation-into-vaccine-websites Benner et al. [2019] K. Benner, G. Thrush, and M. Isaac, “Facebook Engages in Housing Discrimination With Its Ad Practices, U.S. Says,” The New York Times, Mar. 2019. [Online]. Available: https://www.nytimes.com/2019/03/28/us/politics/facebook-housing-discrimination.html Feiner [2022] L. Feiner, “DOJ settles lawsuit with Facebook over allegedly discriminatory housing advertising,” CNBC, Jun. 2022. [Online]. Available: https://www.cnbc.com/2022/06/21/doj-settles-with-facebook-over-allegedly-discriminatory-housing-ads.html Carollo [2023] M. Carollo, “An Algorithm Decides Who Gets a Liver Transplant. Here Are 5 Things to Know. – The Markup,” May 2023. [Online]. Available: https://themarkup.org/hello-world/2023/05/20/an-algorithm-decides-who-gets-a-liver-transplant-here-are-5-things-to-know Carollo and Tanen [2023] M. Carollo and B. Tanen, “Poorer States Suffer Under New Organ Donation Rules, As Livers Go to Waste – The Markup,” The Markup, Mar. 2023. [Online]. Available: https://themarkup.org/organ-failure/2023/03/21/poorer-states-suffer-under-new-organ-donation-rules-as-livers-go-to-waste Electronic Frontier Foundation [2007] Electronic Frontier Foundation, “About EFF,” Jul. 2007. [Online]. Available: https://www.eff.org/about Refugee Law Lab [2020] Refugee Law Lab, “Refugee Law Lab,” Jul. 2020. [Online]. Available: https://refugeelab.ca/ [144] The Citizen Lab, “About the Citizen Lab.” [Online]. 
Available: https://citizenlab.ca/about/ Migration Tech Monitor [2023] Migration Tech Monitor, “Migration and Technology Monitor,” Feb. 2023. [Online]. Available: https://www.migrationtechmonitor.com Ada Lovelace Institute [2023] Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/ [147] American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. 
Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. 
Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ K. Hao and H. 
Swart, “South Africa’s private surveillance machine is fueling a digital apartheid,” MIT Technology Review, Apr. 2022. [Online]. Available: https://www.technologyreview.com/2022/04/19/1049996/south-africa-ai-surveillance-digital-apartheid/ Hao and Hernández [2022] K. Hao and A. P. Hernández, “How the AI industry profits from catastrophe,” MIT Technology Review, Apr. 2022. [Online]. Available: https://www.technologyreview.com/2022/04/20/1050392/ai-industry-appen-scale-data-labels/ Heikkilä [2022b] M. Heikkilä, “The viral AI avatar app Lensa undressed me—without my consent,” MIT Technology Review, Dec. 2022. [Online]. Available: https://www.technologyreview.com/2022/12/12/1064751/the-viral-ai-avatar-app-lensa-undressed-me-without-my-consent/ ProPublica [2023] ProPublica, “Investigative Journalism and News in the Public Interest,” Oct. 2023. [Online]. Available: https://www.propublica.org/ [128] The Markup, “About Us.” [Online]. Available: https://themarkup.org/about Jeffries and Yin [2021] A. Jeffries and L. Yin, “Amazon Puts Its Own “Brands” First Above Better-Rated Products – The Markup,” The Markup, Oct. 2021. [Online]. Available: https://themarkup.org/amazons-advantage/2021/10/14/amazon-puts-its-own-brands-first-above-better-rated-products Angwin and Parris Jr. [2016] J. Angwin and T. Parris Jr., “Facebook Lets Advertisers Exclude Users by Race,” ProPublica, Oct. 2016. [Online]. Available: https://www.propublica.org/article/facebook-lets-advertisers-exclude-users-by-race Keegan [2021] J. Keegan, “Facebook Got Rid of Racial Ad Categories. Or Did It? – The Markup,” The Markup, Jul. 2021. [Online]. Available: https://themarkup.org/citizen-browser/2021/07/09/facebook-got-rid-of-racial-ad-categories-or-did-it Faife and Ng [2021] C. Faife and A. Ng, “Credit Card Ads Were Targeted by Age, Violating Facebook’s Anti-Discrimination Policy – The Markup,” The Markup, Apr. 2021. [Online]. Available: https://themarkup.org/citizen-browser/2021/04/29/credit-card-ads-were-targeted-by-age-violating-facebooks-anti-discrimination-policy The Markup [2022] The Markup, “Citizen Browser,” Oct. 2022. [Online]. Available: https://themarkup.org/series/citizen-browser Ng and Faife [2021] A. Ng and C. Faife, “Facebook Pledges to Remove Discriminatory Credit and Loan Ads Discovered by The Markup – The Markup,” The Markup, May 2021. [Online]. Available: https://themarkup.org/citizen-browser/2021/05/04/facebook-pledges-to-remove-discriminatory-credit-and-loan-ads-discovered-by-the-markup Yin [2021] L. Yin, “Citing Markup Investigation, Civil Rights Group Demands Racial Equity Audit at Google – The Markup,” The Markup, May 2021. [Online]. Available: https://themarkup.org/google-the-giant/2021/05/04/citing-markup-investigation-civil-rights-group-demands-racial-equity-audit-at-google Carollo [2021] M. Carollo, “The Markup’s Work Cited in Effort to Outlaw Discriminatory Algorithms – The Markup,” The Markup, Dec. 2021. [Online]. Available: https://themarkup.org/locked-out/2021/12/17/the-markups-work-cited-in-effort-to-outlaw-discriminatory-algorithms Lecher and Keegan [2021] C. Lecher and J. Keegan, “Nevada Lawmakers Introduce Privacy Legislation After Markup Investigation into Vaccine Websites – The Markup,” The Markup, May 2021. [Online]. Available: https://themarkup.org/blacklight/2021/05/18/nevada-lawmakers-introduce-privacy-legislation-after-markup-investigation-into-vaccine-websites Benner et al. [2019] K. Benner, G. Thrush, and M. Isaac, “Facebook Engages in Housing Discrimination With Its Ad Practices, U.S. 
Says,” The New York Times, Mar. 2019. [Online]. Available: https://www.nytimes.com/2019/03/28/us/politics/facebook-housing-discrimination.html Feiner [2022] L. Feiner, “DOJ settles lawsuit with Facebook over allegedly discriminatory housing advertising,” CNBC, Jun. 2022. [Online]. Available: https://www.cnbc.com/2022/06/21/doj-settles-with-facebook-over-allegedly-discriminatory-housing-ads.html Carollo [2023] M. Carollo, “An Algorithm Decides Who Gets a Liver Transplant. Here Are 5 Things to Know. – The Markup,” May 2023. [Online]. Available: https://themarkup.org/hello-world/2023/05/20/an-algorithm-decides-who-gets-a-liver-transplant-here-are-5-things-to-know Carollo and Tanen [2023] M. Carollo and B. Tanen, “Poorer States Suffer Under New Organ Donation Rules, As Livers Go to Waste – The Markup,” The Markup, Mar. 2023. [Online]. Available: https://themarkup.org/organ-failure/2023/03/21/poorer-states-suffer-under-new-organ-donation-rules-as-livers-go-to-waste Electronic Frontier Foundation [2007] Electronic Frontier Foundation, “About EFF,” Jul. 2007. [Online]. Available: https://www.eff.org/about Refugee Law Lab [2020] Refugee Law Lab, “Refugee Law Lab,” Jul. 2020. [Online]. Available: https://refugeelab.ca/ [144] The Citizen Lab, “About the Citizen Lab.” [Online]. Available: https://citizenlab.ca/about/ Migration Tech Monitor [2023] Migration Tech Monitor, “Migration and Technology Monitor,” Feb. 2023. [Online]. Available: https://www.migrationtechmonitor.com Ada Lovelace Institute [2023] Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/ [147] American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. 
Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. 
Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ K. Hao and A. P. Hernández, “How the AI industry profits from catastrophe,” MIT Technology Review, Apr. 2022. [Online]. Available: https://www.technologyreview.com/2022/04/20/1050392/ai-industry-appen-scale-data-labels/ Heikkilä [2022b] M. Heikkilä, “The viral AI avatar app Lensa undressed me—without my consent,” MIT Technology Review, Dec. 2022. [Online]. Available: https://www.technologyreview.com/2022/12/12/1064751/the-viral-ai-avatar-app-lensa-undressed-me-without-my-consent/ ProPublica [2023] ProPublica, “Investigative Journalism and News in the Public Interest,” Oct. 2023. [Online]. Available: https://www.propublica.org/ [128] The Markup, “About Us.” [Online]. Available: https://themarkup.org/about Jeffries and Yin [2021] A. Jeffries and L. Yin, “Amazon Puts Its Own “Brands” First Above Better-Rated Products – The Markup,” The Markup, Oct. 2021. [Online]. Available: https://themarkup.org/amazons-advantage/2021/10/14/amazon-puts-its-own-brands-first-above-better-rated-products Angwin and Parris Jr. [2016] J. Angwin and T. Parris Jr., “Facebook Lets Advertisers Exclude Users by Race,” ProPublica, Oct. 2016. [Online]. Available: https://www.propublica.org/article/facebook-lets-advertisers-exclude-users-by-race Keegan [2021] J. Keegan, “Facebook Got Rid of Racial Ad Categories. Or Did It? – The Markup,” The Markup, Jul. 2021. [Online]. Available: https://themarkup.org/citizen-browser/2021/07/09/facebook-got-rid-of-racial-ad-categories-or-did-it Faife and Ng [2021] C. Faife and A. Ng, “Credit Card Ads Were Targeted by Age, Violating Facebook’s Anti-Discrimination Policy – The Markup,” The Markup, Apr. 2021. [Online]. Available: https://themarkup.org/citizen-browser/2021/04/29/credit-card-ads-were-targeted-by-age-violating-facebooks-anti-discrimination-policy The Markup [2022] The Markup, “Citizen Browser,” Oct. 2022. [Online]. Available: https://themarkup.org/series/citizen-browser Ng and Faife [2021] A. Ng and C. Faife, “Facebook Pledges to Remove Discriminatory Credit and Loan Ads Discovered by The Markup – The Markup,” The Markup, May 2021. [Online]. Available: https://themarkup.org/citizen-browser/2021/05/04/facebook-pledges-to-remove-discriminatory-credit-and-loan-ads-discovered-by-the-markup Yin [2021] L. Yin, “Citing Markup Investigation, Civil Rights Group Demands Racial Equity Audit at Google – The Markup,” The Markup, May 2021. 
[Online]. Available: https://themarkup.org/google-the-giant/2021/05/04/citing-markup-investigation-civil-rights-group-demands-racial-equity-audit-at-google Carollo [2021] M. Carollo, “The Markup’s Work Cited in Effort to Outlaw Discriminatory Algorithms – The Markup,” The Markup, Dec. 2021. [Online]. Available: https://themarkup.org/locked-out/2021/12/17/the-markups-work-cited-in-effort-to-outlaw-discriminatory-algorithms Lecher and Keegan [2021] C. Lecher and J. Keegan, “Nevada Lawmakers Introduce Privacy Legislation After Markup Investigation into Vaccine Websites – The Markup,” The Markup, May 2021. [Online]. Available: https://themarkup.org/blacklight/2021/05/18/nevada-lawmakers-introduce-privacy-legislation-after-markup-investigation-into-vaccine-websites Benner et al. [2019] K. Benner, G. Thrush, and M. Isaac, “Facebook Engages in Housing Discrimination With Its Ad Practices, U.S. Says,” The New York Times, Mar. 2019. [Online]. Available: https://www.nytimes.com/2019/03/28/us/politics/facebook-housing-discrimination.html Feiner [2022] L. Feiner, “DOJ settles lawsuit with Facebook over allegedly discriminatory housing advertising,” CNBC, Jun. 2022. [Online]. Available: https://www.cnbc.com/2022/06/21/doj-settles-with-facebook-over-allegedly-discriminatory-housing-ads.html Carollo [2023] M. Carollo, “An Algorithm Decides Who Gets a Liver Transplant. Here Are 5 Things to Know. – The Markup,” May 2023. [Online]. Available: https://themarkup.org/hello-world/2023/05/20/an-algorithm-decides-who-gets-a-liver-transplant-here-are-5-things-to-know Carollo and Tanen [2023] M. Carollo and B. Tanen, “Poorer States Suffer Under New Organ Donation Rules, As Livers Go to Waste – The Markup,” The Markup, Mar. 2023. [Online]. Available: https://themarkup.org/organ-failure/2023/03/21/poorer-states-suffer-under-new-organ-donation-rules-as-livers-go-to-waste Electronic Frontier Foundation [2007] Electronic Frontier Foundation, “About EFF,” Jul. 2007. [Online]. Available: https://www.eff.org/about Refugee Law Lab [2020] Refugee Law Lab, “Refugee Law Lab,” Jul. 2020. [Online]. Available: https://refugeelab.ca/ [144] The Citizen Lab, “About the Citizen Lab.” [Online]. Available: https://citizenlab.ca/about/ Migration Tech Monitor [2023] Migration Tech Monitor, “Migration and Technology Monitor,” Feb. 2023. [Online]. Available: https://www.migrationtechmonitor.com Ada Lovelace Institute [2023] Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/ [147] American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. 
Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. Carollo, “An Algorithm Decides Who Gets a Liver Transplant. Here Are 5 Things to Know. 
– The Markup,” May 2023. [Online]. Available: https://themarkup.org/hello-world/2023/05/20/an-algorithm-decides-who-gets-a-liver-transplant-here-are-5-things-to-know Carollo and Tanen [2023] M. Carollo and B. Tanen, “Poorer States Suffer Under New Organ Donation Rules, As Livers Go to Waste – The Markup,” The Markup, Mar. 2023. [Online]. Available: https://themarkup.org/organ-failure/2023/03/21/poorer-states-suffer-under-new-organ-donation-rules-as-livers-go-to-waste Electronic Frontier Foundation [2007] Electronic Frontier Foundation, “About EFF,” Jul. 2007. [Online]. Available: https://www.eff.org/about Refugee Law Lab [2020] Refugee Law Lab, “Refugee Law Lab,” Jul. 2020. [Online]. Available: https://refugeelab.ca/ [144] The Citizen Lab, “About the Citizen Lab.” [Online]. Available: https://citizenlab.ca/about/ Migration Tech Monitor [2023] Migration Tech Monitor, “Migration and Technology Monitor,” Feb. 2023. [Online]. Available: https://www.migrationtechmonitor.com Ada Lovelace Institute [2023] Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/ [147] American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. 
Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 
2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. Carollo and B. Tanen, “Poorer States Suffer Under New Organ Donation Rules, As Livers Go to Waste – The Markup,” The Markup, Mar. 2023. [Online]. Available: https://themarkup.org/organ-failure/2023/03/21/poorer-states-suffer-under-new-organ-donation-rules-as-livers-go-to-waste Electronic Frontier Foundation [2007] Electronic Frontier Foundation, “About EFF,” Jul. 2007. [Online]. Available: https://www.eff.org/about Refugee Law Lab [2020] Refugee Law Lab, “Refugee Law Lab,” Jul. 2020. [Online]. Available: https://refugeelab.ca/ [144] The Citizen Lab, “About the Citizen Lab.” [Online]. Available: https://citizenlab.ca/about/ Migration Tech Monitor [2023] Migration Tech Monitor, “Migration and Technology Monitor,” Feb. 2023. [Online]. Available: https://www.migrationtechmonitor.com Ada Lovelace Institute [2023] Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/ [147] American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. 
Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. 
Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Electronic Frontier Foundation, “About EFF,” Jul. 2007. [Online]. Available: https://www.eff.org/about Refugee Law Lab [2020] Refugee Law Lab, “Refugee Law Lab,” Jul. 2020. [Online]. Available: https://refugeelab.ca/ [144] The Citizen Lab, “About the Citizen Lab.” [Online]. Available: https://citizenlab.ca/about/ Migration Tech Monitor [2023] Migration Tech Monitor, “Migration and Technology Monitor,” Feb. 2023. [Online]. Available: https://www.migrationtechmonitor.com Ada Lovelace Institute [2023] Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/ [147] American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. 
Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. 
[Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. 
National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Refugee Law Lab, “Refugee Law Lab,” Jul. 2020. [Online]. Available: https://refugeelab.ca/ [144] The Citizen Lab, “About the Citizen Lab.” [Online]. Available: https://citizenlab.ca/about/ Migration Tech Monitor [2023] Migration Tech Monitor, “Migration and Technology Monitor,” Feb. 2023. [Online]. Available: https://www.migrationtechmonitor.com Ada Lovelace Institute [2023] Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/ [147] American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. 
Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. 
Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. 
Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. 
Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. 
[2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. 
Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 
2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. 
Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. 
Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. 
Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. 
Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. 
Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. 
Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. 
Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. 
Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. 
[Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. 
2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. 
Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. 
[Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/
Available: https://www.nytimes.com/2019/03/28/us/politics/facebook-housing-discrimination.html Feiner [2022] L. Feiner, “DOJ settles lawsuit with Facebook over allegedly discriminatory housing advertising,” CNBC, Jun. 2022. [Online]. Available: https://www.cnbc.com/2022/06/21/doj-settles-with-facebook-over-allegedly-discriminatory-housing-ads.html Carollo [2023] M. Carollo, “An Algorithm Decides Who Gets a Liver Transplant. Here Are 5 Things to Know. – The Markup,” May 2023. [Online]. Available: https://themarkup.org/hello-world/2023/05/20/an-algorithm-decides-who-gets-a-liver-transplant-here-are-5-things-to-know Carollo and Tanen [2023] M. Carollo and B. Tanen, “Poorer States Suffer Under New Organ Donation Rules, As Livers Go to Waste – The Markup,” The Markup, Mar. 2023. [Online]. Available: https://themarkup.org/organ-failure/2023/03/21/poorer-states-suffer-under-new-organ-donation-rules-as-livers-go-to-waste Electronic Frontier Foundation [2007] Electronic Frontier Foundation, “About EFF,” Jul. 2007. [Online]. Available: https://www.eff.org/about Refugee Law Lab [2020] Refugee Law Lab, “Refugee Law Lab,” Jul. 2020. [Online]. Available: https://refugeelab.ca/ [144] The Citizen Lab, “About the Citizen Lab.” [Online]. Available: https://citizenlab.ca/about/ Migration Tech Monitor [2023] Migration Tech Monitor, “Migration and Technology Monitor,” Feb. 2023. [Online]. Available: https://www.migrationtechmonitor.com Ada Lovelace Institute [2023] Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/ [147] American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. 
Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. 
Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ProPublica, “Investigative Journalism and News in the Public Interest,” Oct. 2023. [Online]. Available: https://www.propublica.org/ [128] The Markup, “About Us.” [Online]. Available: https://themarkup.org/about Jeffries and Yin [2021] A. Jeffries and L. Yin, “Amazon Puts Its Own “Brands” First Above Better-Rated Products – The Markup,” The Markup, Oct. 2021. [Online]. Available: https://themarkup.org/amazons-advantage/2021/10/14/amazon-puts-its-own-brands-first-above-better-rated-products Angwin and Parris Jr. [2016] J. Angwin and T. Parris Jr., “Facebook Lets Advertisers Exclude Users by Race,” ProPublica, Oct. 2016. [Online]. Available: https://www.propublica.org/article/facebook-lets-advertisers-exclude-users-by-race Keegan [2021] J. Keegan, “Facebook Got Rid of Racial Ad Categories. Or Did It? – The Markup,” The Markup, Jul. 2021. [Online]. Available: https://themarkup.org/citizen-browser/2021/07/09/facebook-got-rid-of-racial-ad-categories-or-did-it Faife and Ng [2021] C. Faife and A. Ng, “Credit Card Ads Were Targeted by Age, Violating Facebook’s Anti-Discrimination Policy – The Markup,” The Markup, Apr. 2021. [Online]. Available: https://themarkup.org/citizen-browser/2021/04/29/credit-card-ads-were-targeted-by-age-violating-facebooks-anti-discrimination-policy The Markup [2022] The Markup, “Citizen Browser,” Oct. 2022. [Online]. Available: https://themarkup.org/series/citizen-browser Ng and Faife [2021] A. Ng and C. Faife, “Facebook Pledges to Remove Discriminatory Credit and Loan Ads Discovered by The Markup – The Markup,” The Markup, May 2021. [Online]. Available: https://themarkup.org/citizen-browser/2021/05/04/facebook-pledges-to-remove-discriminatory-credit-and-loan-ads-discovered-by-the-markup Yin [2021] L. Yin, “Citing Markup Investigation, Civil Rights Group Demands Racial Equity Audit at Google – The Markup,” The Markup, May 2021. [Online]. Available: https://themarkup.org/google-the-giant/2021/05/04/citing-markup-investigation-civil-rights-group-demands-racial-equity-audit-at-google Carollo [2021] M. Carollo, “The Markup’s Work Cited in Effort to Outlaw Discriminatory Algorithms – The Markup,” The Markup, Dec. 2021. [Online]. Available: https://themarkup.org/locked-out/2021/12/17/the-markups-work-cited-in-effort-to-outlaw-discriminatory-algorithms Lecher and Keegan [2021] C. Lecher and J. 
Keegan, “Nevada Lawmakers Introduce Privacy Legislation After Markup Investigation into Vaccine Websites – The Markup,” The Markup, May 2021. [Online]. Available: https://themarkup.org/blacklight/2021/05/18/nevada-lawmakers-introduce-privacy-legislation-after-markup-investigation-into-vaccine-websites Benner et al. [2019] K. Benner, G. Thrush, and M. Isaac, “Facebook Engages in Housing Discrimination With Its Ad Practices, U.S. Says,” The New York Times, Mar. 2019. [Online]. Available: https://www.nytimes.com/2019/03/28/us/politics/facebook-housing-discrimination.html Feiner [2022] L. Feiner, “DOJ settles lawsuit with Facebook over allegedly discriminatory housing advertising,” CNBC, Jun. 2022. [Online]. Available: https://www.cnbc.com/2022/06/21/doj-settles-with-facebook-over-allegedly-discriminatory-housing-ads.html Carollo [2023] M. Carollo, “An Algorithm Decides Who Gets a Liver Transplant. Here Are 5 Things to Know. – The Markup,” May 2023. [Online]. Available: https://themarkup.org/hello-world/2023/05/20/an-algorithm-decides-who-gets-a-liver-transplant-here-are-5-things-to-know Carollo and Tanen [2023] M. Carollo and B. Tanen, “Poorer States Suffer Under New Organ Donation Rules, As Livers Go to Waste – The Markup,” The Markup, Mar. 2023. [Online]. Available: https://themarkup.org/organ-failure/2023/03/21/poorer-states-suffer-under-new-organ-donation-rules-as-livers-go-to-waste Electronic Frontier Foundation [2007] Electronic Frontier Foundation, “About EFF,” Jul. 2007. [Online]. Available: https://www.eff.org/about Refugee Law Lab [2020] Refugee Law Lab, “Refugee Law Lab,” Jul. 2020. [Online]. Available: https://refugeelab.ca/ [144] The Citizen Lab, “About the Citizen Lab.” [Online]. Available: https://citizenlab.ca/about/ Migration Tech Monitor [2023] Migration Tech Monitor, “Migration and Technology Monitor,” Feb. 2023. [Online]. Available: https://www.migrationtechmonitor.com Ada Lovelace Institute [2023] Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/ [147] American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. 
Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. 
[Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. 
National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ The Markup, “About Us.” [Online]. Available: https://themarkup.org/about Jeffries and Yin [2021] A. Jeffries and L. Yin, “Amazon Puts Its Own “Brands” First Above Better-Rated Products – The Markup,” The Markup, Oct. 2021. [Online]. Available: https://themarkup.org/amazons-advantage/2021/10/14/amazon-puts-its-own-brands-first-above-better-rated-products Angwin and Parris Jr. [2016] J. Angwin and T. Parris Jr., “Facebook Lets Advertisers Exclude Users by Race,” ProPublica, Oct. 2016. [Online]. Available: https://www.propublica.org/article/facebook-lets-advertisers-exclude-users-by-race Keegan [2021] J. Keegan, “Facebook Got Rid of Racial Ad Categories. Or Did It? – The Markup,” The Markup, Jul. 2021. [Online]. Available: https://themarkup.org/citizen-browser/2021/07/09/facebook-got-rid-of-racial-ad-categories-or-did-it Faife and Ng [2021] C. Faife and A. Ng, “Credit Card Ads Were Targeted by Age, Violating Facebook’s Anti-Discrimination Policy – The Markup,” The Markup, Apr. 2021. [Online]. Available: https://themarkup.org/citizen-browser/2021/04/29/credit-card-ads-were-targeted-by-age-violating-facebooks-anti-discrimination-policy The Markup [2022] The Markup, “Citizen Browser,” Oct. 2022. [Online]. Available: https://themarkup.org/series/citizen-browser Ng and Faife [2021] A. Ng and C. Faife, “Facebook Pledges to Remove Discriminatory Credit and Loan Ads Discovered by The Markup – The Markup,” The Markup, May 2021. [Online]. Available: https://themarkup.org/citizen-browser/2021/05/04/facebook-pledges-to-remove-discriminatory-credit-and-loan-ads-discovered-by-the-markup Yin [2021] L. Yin, “Citing Markup Investigation, Civil Rights Group Demands Racial Equity Audit at Google – The Markup,” The Markup, May 2021. [Online]. 
Available: https://themarkup.org/google-the-giant/2021/05/04/citing-markup-investigation-civil-rights-group-demands-racial-equity-audit-at-google Carollo [2021] M. Carollo, “The Markup’s Work Cited in Effort to Outlaw Discriminatory Algorithms – The Markup,” The Markup, Dec. 2021. [Online]. Available: https://themarkup.org/locked-out/2021/12/17/the-markups-work-cited-in-effort-to-outlaw-discriminatory-algorithms Lecher and Keegan [2021] C. Lecher and J. Keegan, “Nevada Lawmakers Introduce Privacy Legislation After Markup Investigation into Vaccine Websites – The Markup,” The Markup, May 2021. [Online]. Available: https://themarkup.org/blacklight/2021/05/18/nevada-lawmakers-introduce-privacy-legislation-after-markup-investigation-into-vaccine-websites Benner et al. [2019] K. Benner, G. Thrush, and M. Isaac, “Facebook Engages in Housing Discrimination With Its Ad Practices, U.S. Says,” The New York Times, Mar. 2019. [Online]. Available: https://www.nytimes.com/2019/03/28/us/politics/facebook-housing-discrimination.html Feiner [2022] L. Feiner, “DOJ settles lawsuit with Facebook over allegedly discriminatory housing advertising,” CNBC, Jun. 2022. [Online]. Available: https://www.cnbc.com/2022/06/21/doj-settles-with-facebook-over-allegedly-discriminatory-housing-ads.html Carollo [2023] M. Carollo, “An Algorithm Decides Who Gets a Liver Transplant. Here Are 5 Things to Know. – The Markup,” May 2023. [Online]. Available: https://themarkup.org/hello-world/2023/05/20/an-algorithm-decides-who-gets-a-liver-transplant-here-are-5-things-to-know Carollo and Tanen [2023] M. Carollo and B. Tanen, “Poorer States Suffer Under New Organ Donation Rules, As Livers Go to Waste – The Markup,” The Markup, Mar. 2023. [Online]. Available: https://themarkup.org/organ-failure/2023/03/21/poorer-states-suffer-under-new-organ-donation-rules-as-livers-go-to-waste Electronic Frontier Foundation [2007] Electronic Frontier Foundation, “About EFF,” Jul. 2007. [Online]. Available: https://www.eff.org/about Refugee Law Lab [2020] Refugee Law Lab, “Refugee Law Lab,” Jul. 2020. [Online]. Available: https://refugeelab.ca/ [144] The Citizen Lab, “About the Citizen Lab.” [Online]. Available: https://citizenlab.ca/about/ Migration Tech Monitor [2023] Migration Tech Monitor, “Migration and Technology Monitor,” Feb. 2023. [Online]. Available: https://www.migrationtechmonitor.com Ada Lovelace Institute [2023] Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/ [147] American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. 
Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   
New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. 
Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. Jeffries and L. Yin, “Amazon Puts Its Own “Brands” First Above Better-Rated Products – The Markup,” The Markup, Oct. 2021. [Online]. Available: https://themarkup.org/amazons-advantage/2021/10/14/amazon-puts-its-own-brands-first-above-better-rated-products Angwin and Parris Jr. [2016] J. Angwin and T. Parris Jr., “Facebook Lets Advertisers Exclude Users by Race,” ProPublica, Oct. 2016. [Online]. Available: https://www.propublica.org/article/facebook-lets-advertisers-exclude-users-by-race Keegan [2021] J. Keegan, “Facebook Got Rid of Racial Ad Categories. Or Did It? – The Markup,” The Markup, Jul. 2021. [Online]. Available: https://themarkup.org/citizen-browser/2021/07/09/facebook-got-rid-of-racial-ad-categories-or-did-it Faife and Ng [2021] C. Faife and A. Ng, “Credit Card Ads Were Targeted by Age, Violating Facebook’s Anti-Discrimination Policy – The Markup,” The Markup, Apr. 2021. [Online]. Available: https://themarkup.org/citizen-browser/2021/04/29/credit-card-ads-were-targeted-by-age-violating-facebooks-anti-discrimination-policy The Markup [2022] The Markup, “Citizen Browser,” Oct. 2022. [Online]. Available: https://themarkup.org/series/citizen-browser Ng and Faife [2021] A. Ng and C. Faife, “Facebook Pledges to Remove Discriminatory Credit and Loan Ads Discovered by The Markup – The Markup,” The Markup, May 2021. [Online]. 
Available: https://themarkup.org/citizen-browser/2021/05/04/facebook-pledges-to-remove-discriminatory-credit-and-loan-ads-discovered-by-the-markup Yin [2021] L. Yin, “Citing Markup Investigation, Civil Rights Group Demands Racial Equity Audit at Google – The Markup,” The Markup, May 2021. [Online]. Available: https://themarkup.org/google-the-giant/2021/05/04/citing-markup-investigation-civil-rights-group-demands-racial-equity-audit-at-google Carollo [2021] M. Carollo, “The Markup’s Work Cited in Effort to Outlaw Discriminatory Algorithms – The Markup,” The Markup, Dec. 2021. [Online]. Available: https://themarkup.org/locked-out/2021/12/17/the-markups-work-cited-in-effort-to-outlaw-discriminatory-algorithms Lecher and Keegan [2021] C. Lecher and J. Keegan, “Nevada Lawmakers Introduce Privacy Legislation After Markup Investigation into Vaccine Websites – The Markup,” The Markup, May 2021. [Online]. Available: https://themarkup.org/blacklight/2021/05/18/nevada-lawmakers-introduce-privacy-legislation-after-markup-investigation-into-vaccine-websites Benner et al. [2019] K. Benner, G. Thrush, and M. Isaac, “Facebook Engages in Housing Discrimination With Its Ad Practices, U.S. Says,” The New York Times, Mar. 2019. [Online]. Available: https://www.nytimes.com/2019/03/28/us/politics/facebook-housing-discrimination.html Feiner [2022] L. Feiner, “DOJ settles lawsuit with Facebook over allegedly discriminatory housing advertising,” CNBC, Jun. 2022. [Online]. Available: https://www.cnbc.com/2022/06/21/doj-settles-with-facebook-over-allegedly-discriminatory-housing-ads.html Carollo [2023] M. Carollo, “An Algorithm Decides Who Gets a Liver Transplant. Here Are 5 Things to Know. – The Markup,” May 2023. [Online]. Available: https://themarkup.org/hello-world/2023/05/20/an-algorithm-decides-who-gets-a-liver-transplant-here-are-5-things-to-know Carollo and Tanen [2023] M. Carollo and B. Tanen, “Poorer States Suffer Under New Organ Donation Rules, As Livers Go to Waste – The Markup,” The Markup, Mar. 2023. [Online]. Available: https://themarkup.org/organ-failure/2023/03/21/poorer-states-suffer-under-new-organ-donation-rules-as-livers-go-to-waste Electronic Frontier Foundation [2007] Electronic Frontier Foundation, “About EFF,” Jul. 2007. [Online]. Available: https://www.eff.org/about Refugee Law Lab [2020] Refugee Law Lab, “Refugee Law Lab,” Jul. 2020. [Online]. Available: https://refugeelab.ca/ [144] The Citizen Lab, “About the Citizen Lab.” [Online]. Available: https://citizenlab.ca/about/ Migration Tech Monitor [2023] Migration Tech Monitor, “Migration and Technology Monitor,” Feb. 2023. [Online]. Available: https://www.migrationtechmonitor.com Ada Lovelace Institute [2023] Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/ [147] American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. 
Available: https://ico.org.uk/
National Institute of Standards and Technology [2023] National Institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/
Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/
Information Commissioner’s Office [2022] Information Commissioner’s Office, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/
[155] National Institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home
AI [2023] National Institute of Standards and Technology, “Artificial Intelligence Risk Management Framework (AI RMF 1.0),” 2023.
Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/
Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, NIST Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf
Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443). IEEE, 2003, p. 44.
Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf
Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/
Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209
Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151.
Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im)possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021.
Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19. New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598
Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669
Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761
Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022.
Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23. New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237
Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf
Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf
Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/
Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558
Accountable Tech [2023] Accountable Tech and AI Now Institute, “Zero Trust AI Governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance
Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/
Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as Culture, vol. 31, no. 1, pp. 121–135, 2022.
Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463.
Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479.
Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022.
Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, Feb. 11, 2021.
Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986
Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184.
National Institute of Standards and Technology [2020] National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt
National Institute of Standards and Technology [2000] National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000
National Institute of Standards and Technology [2002] National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002
National Institute of Standards and Technology [2006] National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006
National Institute of Standards and Technology [2013] National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013
AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/
Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/
Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/
Angwin and Parris [2016] J. Angwin and T. Parris Jr., “Facebook Lets Advertisers Exclude Users by Race,” ProPublica, Oct. 2016. [Online]. Available: https://www.propublica.org/article/facebook-lets-advertisers-exclude-users-by-race
Keegan [2021] J. Keegan, “Facebook Got Rid of Racial Ad Categories. Or Did It?” The Markup, Jul. 2021. [Online]. Available: https://themarkup.org/citizen-browser/2021/07/09/facebook-got-rid-of-racial-ad-categories-or-did-it
Faife and Ng [2021] C. Faife and A. Ng, “Credit Card Ads Were Targeted by Age, Violating Facebook’s Anti-Discrimination Policy,” The Markup, Apr. 2021. [Online]. Available: https://themarkup.org/citizen-browser/2021/04/29/credit-card-ads-were-targeted-by-age-violating-facebooks-anti-discrimination-policy
The Markup [2022] The Markup, “Citizen Browser,” Oct. 2022. [Online]. Available: https://themarkup.org/series/citizen-browser
Ng and Faife [2021] A. Ng and C. Faife, “Facebook Pledges to Remove Discriminatory Credit and Loan Ads Discovered by The Markup,” The Markup, May 2021. [Online]. Available: https://themarkup.org/citizen-browser/2021/05/04/facebook-pledges-to-remove-discriminatory-credit-and-loan-ads-discovered-by-the-markup
Yin [2021] L. Yin, “Citing Markup Investigation, Civil Rights Group Demands Racial Equity Audit at Google,” The Markup, May 2021. [Online]. Available: https://themarkup.org/google-the-giant/2021/05/04/citing-markup-investigation-civil-rights-group-demands-racial-equity-audit-at-google
Carollo [2021] M. Carollo, “The Markup’s Work Cited in Effort to Outlaw Discriminatory Algorithms,” The Markup, Dec. 2021. [Online]. Available: https://themarkup.org/locked-out/2021/12/17/the-markups-work-cited-in-effort-to-outlaw-discriminatory-algorithms
Lecher and Keegan [2021] C. Lecher and J. Keegan, “Nevada Lawmakers Introduce Privacy Legislation After Markup Investigation into Vaccine Websites,” The Markup, May 2021. [Online]. Available: https://themarkup.org/blacklight/2021/05/18/nevada-lawmakers-introduce-privacy-legislation-after-markup-investigation-into-vaccine-websites
Benner et al. [2019] K. Benner, G. Thrush, and M. Isaac, “Facebook Engages in Housing Discrimination With Its Ad Practices, U.S. Says,” The New York Times, Mar. 2019. [Online]. Available: https://www.nytimes.com/2019/03/28/us/politics/facebook-housing-discrimination.html
Feiner [2022] L. Feiner, “DOJ settles lawsuit with Facebook over allegedly discriminatory housing advertising,” CNBC, Jun. 2022. [Online]. Available: https://www.cnbc.com/2022/06/21/doj-settles-with-facebook-over-allegedly-discriminatory-housing-ads.html
Carollo [2023] M. Carollo, “An Algorithm Decides Who Gets a Liver Transplant. Here Are 5 Things to Know.” The Markup, May 2023. [Online]. Available: https://themarkup.org/hello-world/2023/05/20/an-algorithm-decides-who-gets-a-liver-transplant-here-are-5-things-to-know
Carollo and Tanen [2023] M. Carollo and B. Tanen, “Poorer States Suffer Under New Organ Donation Rules, As Livers Go to Waste,” The Markup, Mar. 2023. [Online]. Available: https://themarkup.org/organ-failure/2023/03/21/poorer-states-suffer-under-new-organ-donation-rules-as-livers-go-to-waste
Electronic Frontier Foundation [2007] Electronic Frontier Foundation, “About EFF,” Jul. 2007. [Online]. Available: https://www.eff.org/about
Refugee Law Lab [2020] Refugee Law Lab, “Refugee Law Lab,” Jul. 2020. [Online]. Available: https://refugeelab.ca/
[144] The Citizen Lab, “About the Citizen Lab.” [Online]. Available: https://citizenlab.ca/about/
Migration Tech Monitor [2023] Migration Tech Monitor, “Migration and Technology Monitor,” Feb. 2023. [Online]. Available: https://www.migrationtechmonitor.com
Ada Lovelace Institute [2023] Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/
[147] American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/
[148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us
Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims
ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong
Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. 
Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. 
Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. 
Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/ [147] American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. 
[Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. 
Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. 
Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. 
Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   
New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. 
Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. 
Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. 
[Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. 
National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. 
Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. 
Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. 
Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. 
Boag et al.
[2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. 
Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 
2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. 
Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. 
D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. 
National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. 
Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. 
National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 
90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. 
Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. 
Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. 
Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. 
Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ L. 
Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/
K. Hao and H. Swart, “South Africa’s private surveillance machine is fueling a digital apartheid,” MIT Technology Review, Apr. 2022. [Online]. Available: https://www.technologyreview.com/2022/04/19/1049996/south-africa-ai-surveillance-digital-apartheid/
K. Hao and A. P. Hernández, “How the AI industry profits from catastrophe,” MIT Technology Review, Apr. 2022. [Online]. Available: https://www.technologyreview.com/2022/04/20/1050392/ai-industry-appen-scale-data-labels/
M. Heikkilä, “The viral AI avatar app Lensa undressed me—without my consent,” MIT Technology Review, Dec. 2022. [Online]. Available: https://www.technologyreview.com/2022/12/12/1064751/the-viral-ai-avatar-app-lensa-undressed-me-without-my-consent/
ProPublica, “Investigative Journalism and News in the Public Interest,” Oct. 2023. [Online]. Available: https://www.propublica.org/
The Markup, “About Us.” [Online]. Available: https://themarkup.org/about
J. Angwin and T. Parris Jr., “Facebook Lets Advertisers Exclude Users by Race,” ProPublica, Oct. 2016. [Online]. Available: https://www.propublica.org/article/facebook-lets-advertisers-exclude-users-by-race
J. Keegan, “Facebook Got Rid of Racial Ad Categories. Or Did It?,” The Markup, Jul. 2021. [Online]. Available: https://themarkup.org/citizen-browser/2021/07/09/facebook-got-rid-of-racial-ad-categories-or-did-it
C. Faife and A. Ng, “Credit Card Ads Were Targeted by Age, Violating Facebook’s Anti-Discrimination Policy,” The Markup, Apr. 2021. [Online]. Available: https://themarkup.org/citizen-browser/2021/04/29/credit-card-ads-were-targeted-by-age-violating-facebooks-anti-discrimination-policy
The Markup, “Citizen Browser,” Oct. 2022. [Online]. Available: https://themarkup.org/series/citizen-browser
A. Ng and C. Faife, “Facebook Pledges to Remove Discriminatory Credit and Loan Ads Discovered by The Markup,” The Markup, May 2021. [Online]. Available: https://themarkup.org/citizen-browser/2021/05/04/facebook-pledges-to-remove-discriminatory-credit-and-loan-ads-discovered-by-the-markup
L. Yin, “Citing Markup Investigation, Civil Rights Group Demands Racial Equity Audit at Google,” The Markup, May 2021. [Online]. Available: https://themarkup.org/google-the-giant/2021/05/04/citing-markup-investigation-civil-rights-group-demands-racial-equity-audit-at-google
M. Carollo, “The Markup’s Work Cited in Effort to Outlaw Discriminatory Algorithms,” The Markup, Dec. 2021. [Online]. Available: https://themarkup.org/locked-out/2021/12/17/the-markups-work-cited-in-effort-to-outlaw-discriminatory-algorithms
C. Lecher and J. Keegan, “Nevada Lawmakers Introduce Privacy Legislation After Markup Investigation into Vaccine Websites,” The Markup, May 2021. [Online]. Available: https://themarkup.org/blacklight/2021/05/18/nevada-lawmakers-introduce-privacy-legislation-after-markup-investigation-into-vaccine-websites
K. Benner, G. Thrush, and M. Isaac, “Facebook Engages in Housing Discrimination With Its Ad Practices, U.S. Says,” The New York Times, Mar. 2019. [Online]. Available: https://www.nytimes.com/2019/03/28/us/politics/facebook-housing-discrimination.html
L. Feiner, “DOJ settles lawsuit with Facebook over allegedly discriminatory housing advertising,” CNBC, Jun. 2022. [Online]. Available: https://www.cnbc.com/2022/06/21/doj-settles-with-facebook-over-allegedly-discriminatory-housing-ads.html
M. Carollo, “An Algorithm Decides Who Gets a Liver Transplant. Here Are 5 Things to Know,” The Markup, May 2023. [Online]. Available: https://themarkup.org/hello-world/2023/05/20/an-algorithm-decides-who-gets-a-liver-transplant-here-are-5-things-to-know
M. Carollo and B. Tanen, “Poorer States Suffer Under New Organ Donation Rules, As Livers Go to Waste,” The Markup, Mar. 2023. [Online]. Available: https://themarkup.org/organ-failure/2023/03/21/poorer-states-suffer-under-new-organ-donation-rules-as-livers-go-to-waste
Electronic Frontier Foundation, “About EFF,” Jul. 2007. [Online]. Available: https://www.eff.org/about
Refugee Law Lab, “Refugee Law Lab,” Jul. 2020. [Online]. Available: https://refugeelab.ca/
The Citizen Lab, “About the Citizen Lab.” [Online]. Available: https://citizenlab.ca/about/
Migration Tech Monitor, “Migration and Technology Monitor,” Feb. 2023. [Online]. Available: https://www.migrationtechmonitor.com
Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/
American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/
Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us
K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims
ACLU of Idaho, “K.W. v. Armstrong,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong
Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/
National Institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/
Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/
Information Commissioner’s Office, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/
National Institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home
National Institute of Standards and Technology, “Artificial Intelligence Risk Management Framework (AI RMF 1.0),” 2023.
Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/
P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, NIST Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf
P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference Proceedings (Cat. No. 03CH37443), IEEE, 2003, p. 44.
M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf
Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/
M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209
I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151.
S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im)possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021.
A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19. New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598
A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669
A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761
B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022.
B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23. New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237
Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf
A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf
B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/
P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558
Accountable Tech, AI Now Institute, and EPIC, “Zero Trust AI Governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance
M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/
T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as Culture, vol. 31, no. 1, pp. 121–135, 2022.
W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463.
D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479.
M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022.
H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, Feb. 11, 2021.
L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986
National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt
National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000
National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002
National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006
National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013
AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/
L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/
J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/
A. Jeffries and L. Yin, “Amazon Puts Its Own “Brands” First Above Better-Rated Products,” The Markup, Oct. 2021. [Online].
Available: https://themarkup.org/amazons-advantage/2021/10/14/amazon-puts-its-own-brands-first-above-better-rated-products Angwin and Parris Jr. [2016] J. Angwin and T. Parris Jr., “Facebook Lets Advertisers Exclude Users by Race,” ProPublica, Oct. 2016. [Online]. Available: https://www.propublica.org/article/facebook-lets-advertisers-exclude-users-by-race Keegan [2021] J. Keegan, “Facebook Got Rid of Racial Ad Categories. Or Did It? – The Markup,” The Markup, Jul. 2021. [Online]. Available: https://themarkup.org/citizen-browser/2021/07/09/facebook-got-rid-of-racial-ad-categories-or-did-it Faife and Ng [2021] C. Faife and A. Ng, “Credit Card Ads Were Targeted by Age, Violating Facebook’s Anti-Discrimination Policy – The Markup,” The Markup, Apr. 2021. [Online]. Available: https://themarkup.org/citizen-browser/2021/04/29/credit-card-ads-were-targeted-by-age-violating-facebooks-anti-discrimination-policy The Markup [2022] The Markup, “Citizen Browser,” Oct. 2022. [Online]. Available: https://themarkup.org/series/citizen-browser Ng and Faife [2021] A. Ng and C. Faife, “Facebook Pledges to Remove Discriminatory Credit and Loan Ads Discovered by The Markup – The Markup,” The Markup, May 2021. [Online]. Available: https://themarkup.org/citizen-browser/2021/05/04/facebook-pledges-to-remove-discriminatory-credit-and-loan-ads-discovered-by-the-markup Yin [2021] L. Yin, “Citing Markup Investigation, Civil Rights Group Demands Racial Equity Audit at Google – The Markup,” The Markup, May 2021. [Online]. Available: https://themarkup.org/google-the-giant/2021/05/04/citing-markup-investigation-civil-rights-group-demands-racial-equity-audit-at-google Carollo [2021] M. Carollo, “The Markup’s Work Cited in Effort to Outlaw Discriminatory Algorithms – The Markup,” The Markup, Dec. 2021. [Online]. Available: https://themarkup.org/locked-out/2021/12/17/the-markups-work-cited-in-effort-to-outlaw-discriminatory-algorithms Lecher and Keegan [2021] C. Lecher and J. Keegan, “Nevada Lawmakers Introduce Privacy Legislation After Markup Investigation into Vaccine Websites – The Markup,” The Markup, May 2021. [Online]. Available: https://themarkup.org/blacklight/2021/05/18/nevada-lawmakers-introduce-privacy-legislation-after-markup-investigation-into-vaccine-websites Benner et al. [2019] K. Benner, G. Thrush, and M. Isaac, “Facebook Engages in Housing Discrimination With Its Ad Practices, U.S. Says,” The New York Times, Mar. 2019. [Online]. Available: https://www.nytimes.com/2019/03/28/us/politics/facebook-housing-discrimination.html Feiner [2022] L. Feiner, “DOJ settles lawsuit with Facebook over allegedly discriminatory housing advertising,” CNBC, Jun. 2022. [Online]. Available: https://www.cnbc.com/2022/06/21/doj-settles-with-facebook-over-allegedly-discriminatory-housing-ads.html Carollo [2023] M. Carollo, “An Algorithm Decides Who Gets a Liver Transplant. Here Are 5 Things to Know. – The Markup,” May 2023. [Online]. Available: https://themarkup.org/hello-world/2023/05/20/an-algorithm-decides-who-gets-a-liver-transplant-here-are-5-things-to-know Carollo and Tanen [2023] M. Carollo and B. Tanen, “Poorer States Suffer Under New Organ Donation Rules, As Livers Go to Waste – The Markup,” The Markup, Mar. 2023. [Online]. Available: https://themarkup.org/organ-failure/2023/03/21/poorer-states-suffer-under-new-organ-donation-rules-as-livers-go-to-waste Electronic Frontier Foundation [2007] Electronic Frontier Foundation, “About EFF,” Jul. 2007. [Online]. 
Available: https://www.eff.org/about Refugee Law Lab [2020] Refugee Law Lab, “Refugee Law Lab,” Jul. 2020. [Online]. Available: https://refugeelab.ca/ [144] The Citizen Lab, “About the Citizen Lab.” [Online]. Available: https://citizenlab.ca/about/ Migration Tech Monitor [2023] Migration Tech Monitor, “Migration and Technology Monitor,” Feb. 2023. [Online]. Available: https://www.migrationtechmonitor.com Ada Lovelace Institute [2023] Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/ [147] American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. Carollo, “The Markup’s Work Cited in Effort to Outlaw Discriminatory Algorithms – The Markup,” The Markup, Dec. 2021. [Online]. Available: https://themarkup.org/locked-out/2021/12/17/the-markups-work-cited-in-effort-to-outlaw-discriminatory-algorithms Lecher and Keegan [2021] C. Lecher and J. Keegan, “Nevada Lawmakers Introduce Privacy Legislation After Markup Investigation into Vaccine Websites – The Markup,” The Markup, May 2021. [Online]. Available: https://themarkup.org/blacklight/2021/05/18/nevada-lawmakers-introduce-privacy-legislation-after-markup-investigation-into-vaccine-websites Benner et al. [2019] K. Benner, G. Thrush, and M. Isaac, “Facebook Engages in Housing Discrimination With Its Ad Practices, U.S. Says,” The New York Times, Mar. 2019. [Online]. Available: https://www.nytimes.com/2019/03/28/us/politics/facebook-housing-discrimination.html Feiner [2022] L. Feiner, “DOJ settles lawsuit with Facebook over allegedly discriminatory housing advertising,” CNBC, Jun. 2022. [Online]. Available: https://www.cnbc.com/2022/06/21/doj-settles-with-facebook-over-allegedly-discriminatory-housing-ads.html Carollo [2023] M. Carollo, “An Algorithm Decides Who Gets a Liver Transplant. Here Are 5 Things to Know. – The Markup,” May 2023. [Online]. Available: https://themarkup.org/hello-world/2023/05/20/an-algorithm-decides-who-gets-a-liver-transplant-here-are-5-things-to-know Carollo and Tanen [2023] M. Carollo and B. Tanen, “Poorer States Suffer Under New Organ Donation Rules, As Livers Go to Waste – The Markup,” The Markup, Mar. 2023. [Online]. Available: https://themarkup.org/organ-failure/2023/03/21/poorer-states-suffer-under-new-organ-donation-rules-as-livers-go-to-waste Electronic Frontier Foundation [2007] Electronic Frontier Foundation, “About EFF,” Jul. 2007. [Online]. Available: https://www.eff.org/about Refugee Law Lab [2020] Refugee Law Lab, “Refugee Law Lab,” Jul. 2020. [Online]. Available: https://refugeelab.ca/ [144] The Citizen Lab, “About the Citizen Lab.” [Online]. Available: https://citizenlab.ca/about/ Migration Tech Monitor [2023] Migration Tech Monitor, “Migration and Technology Monitor,” Feb. 2023. [Online]. Available: https://www.migrationtechmonitor.com Ada Lovelace Institute [2023] Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/ [147] American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. 
Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. 
Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. 
Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ C. Lecher and J. Keegan, “Nevada Lawmakers Introduce Privacy Legislation After Markup Investigation into Vaccine Websites – The Markup,” The Markup, May 2021. [Online]. Available: https://themarkup.org/blacklight/2021/05/18/nevada-lawmakers-introduce-privacy-legislation-after-markup-investigation-into-vaccine-websites Benner et al. [2019] K. Benner, G. Thrush, and M. Isaac, “Facebook Engages in Housing Discrimination With Its Ad Practices, U.S. Says,” The New York Times, Mar. 2019. [Online]. Available: https://www.nytimes.com/2019/03/28/us/politics/facebook-housing-discrimination.html Feiner [2022] L. 
Feiner, “DOJ settles lawsuit with Facebook over allegedly discriminatory housing advertising,” CNBC, Jun. 2022. [Online]. Available: https://www.cnbc.com/2022/06/21/doj-settles-with-facebook-over-allegedly-discriminatory-housing-ads.html Carollo [2023] M. Carollo, “An Algorithm Decides Who Gets a Liver Transplant. Here Are 5 Things to Know. – The Markup,” May 2023. [Online]. Available: https://themarkup.org/hello-world/2023/05/20/an-algorithm-decides-who-gets-a-liver-transplant-here-are-5-things-to-know Carollo and Tanen [2023] M. Carollo and B. Tanen, “Poorer States Suffer Under New Organ Donation Rules, As Livers Go to Waste – The Markup,” The Markup, Mar. 2023. [Online]. Available: https://themarkup.org/organ-failure/2023/03/21/poorer-states-suffer-under-new-organ-donation-rules-as-livers-go-to-waste Electronic Frontier Foundation [2007] Electronic Frontier Foundation, “About EFF,” Jul. 2007. [Online]. Available: https://www.eff.org/about Refugee Law Lab [2020] Refugee Law Lab, “Refugee Law Lab,” Jul. 2020. [Online]. Available: https://refugeelab.ca/ [144] The Citizen Lab, “About the Citizen Lab.” [Online]. Available: https://citizenlab.ca/about/ Migration Tech Monitor [2023] Migration Tech Monitor, “Migration and Technology Monitor,” Feb. 2023. [Online]. Available: https://www.migrationtechmonitor.com Ada Lovelace Institute [2023] Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/ [147] American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. 
Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. 
Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. 
[Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ K. Benner, G. Thrush, and M. Isaac, “Facebook Engages in Housing Discrimination With Its Ad Practices, U.S. Says,” The New York Times, Mar. 2019. [Online]. Available: https://www.nytimes.com/2019/03/28/us/politics/facebook-housing-discrimination.html Feiner [2022] L. Feiner, “DOJ settles lawsuit with Facebook over allegedly discriminatory housing advertising,” CNBC, Jun. 2022. [Online]. Available: https://www.cnbc.com/2022/06/21/doj-settles-with-facebook-over-allegedly-discriminatory-housing-ads.html Carollo [2023] M. Carollo, “An Algorithm Decides Who Gets a Liver Transplant. Here Are 5 Things to Know. – The Markup,” May 2023. [Online]. Available: https://themarkup.org/hello-world/2023/05/20/an-algorithm-decides-who-gets-a-liver-transplant-here-are-5-things-to-know Carollo and Tanen [2023] M. Carollo and B. Tanen, “Poorer States Suffer Under New Organ Donation Rules, As Livers Go to Waste – The Markup,” The Markup, Mar. 2023. [Online]. Available: https://themarkup.org/organ-failure/2023/03/21/poorer-states-suffer-under-new-organ-donation-rules-as-livers-go-to-waste Electronic Frontier Foundation [2007] Electronic Frontier Foundation, “About EFF,” Jul. 2007. [Online]. Available: https://www.eff.org/about Refugee Law Lab [2020] Refugee Law Lab, “Refugee Law Lab,” Jul. 2020. [Online]. Available: https://refugeelab.ca/ [144] The Citizen Lab, “About the Citizen Lab.” [Online]. Available: https://citizenlab.ca/about/ Migration Tech Monitor [2023] Migration Tech Monitor, “Migration and Technology Monitor,” Feb. 2023. [Online]. Available: https://www.migrationtechmonitor.com Ada Lovelace Institute [2023] Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/ [147] American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. 
Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. 
Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. 
Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ L. Feiner, “DOJ settles lawsuit with Facebook over allegedly discriminatory housing advertising,” CNBC, Jun. 2022. [Online]. Available: https://www.cnbc.com/2022/06/21/doj-settles-with-facebook-over-allegedly-discriminatory-housing-ads.html Carollo [2023] M. Carollo, “An Algorithm Decides Who Gets a Liver Transplant. Here Are 5 Things to Know. – The Markup,” May 2023. [Online]. Available: https://themarkup.org/hello-world/2023/05/20/an-algorithm-decides-who-gets-a-liver-transplant-here-are-5-things-to-know Carollo and Tanen [2023] M. Carollo and B. Tanen, “Poorer States Suffer Under New Organ Donation Rules, As Livers Go to Waste – The Markup,” The Markup, Mar. 2023. [Online]. Available: https://themarkup.org/organ-failure/2023/03/21/poorer-states-suffer-under-new-organ-donation-rules-as-livers-go-to-waste Electronic Frontier Foundation [2007] Electronic Frontier Foundation, “About EFF,” Jul. 2007. [Online]. Available: https://www.eff.org/about Refugee Law Lab [2020] Refugee Law Lab, “Refugee Law Lab,” Jul. 2020. [Online]. Available: https://refugeelab.ca/ [144] The Citizen Lab, “About the Citizen Lab.” [Online]. 
Available: https://citizenlab.ca/about/ Migration Tech Monitor [2023] Migration Tech Monitor, “Migration and Technology Monitor,” Feb. 2023. [Online]. Available: https://www.migrationtechmonitor.com Ada Lovelace Institute [2023] Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/ [147] American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. 
Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. 
Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. Carollo, “An Algorithm Decides Who Gets a Liver Transplant. Here Are 5 Things to Know. 
– The Markup,” May 2023. [Online]. Available: https://themarkup.org/hello-world/2023/05/20/an-algorithm-decides-who-gets-a-liver-transplant-here-are-5-things-to-know Carollo and Tanen [2023] M. Carollo and B. Tanen, “Poorer States Suffer Under New Organ Donation Rules, As Livers Go to Waste – The Markup,” The Markup, Mar. 2023. [Online]. Available: https://themarkup.org/organ-failure/2023/03/21/poorer-states-suffer-under-new-organ-donation-rules-as-livers-go-to-waste Electronic Frontier Foundation [2007] Electronic Frontier Foundation, “About EFF,” Jul. 2007. [Online]. Available: https://www.eff.org/about Refugee Law Lab [2020] Refugee Law Lab, “Refugee Law Lab,” Jul. 2020. [Online]. Available: https://refugeelab.ca/ [144] The Citizen Lab, “About the Citizen Lab.” [Online]. Available: https://citizenlab.ca/about/ Migration Tech Monitor [2023] Migration Tech Monitor, “Migration and Technology Monitor,” Feb. 2023. [Online]. Available: https://www.migrationtechmonitor.com Ada Lovelace Institute [2023] Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/ [147] American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. 
Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 
2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. Carollo and B. Tanen, “Poorer States Suffer Under New Organ Donation Rules, As Livers Go to Waste – The Markup,” The Markup, Mar. 2023. [Online]. Available: https://themarkup.org/organ-failure/2023/03/21/poorer-states-suffer-under-new-organ-donation-rules-as-livers-go-to-waste Electronic Frontier Foundation [2007] Electronic Frontier Foundation, “About EFF,” Jul. 2007. [Online]. Available: https://www.eff.org/about Refugee Law Lab [2020] Refugee Law Lab, “Refugee Law Lab,” Jul. 2020. [Online]. Available: https://refugeelab.ca/ [144] The Citizen Lab, “About the Citizen Lab.” [Online]. Available: https://citizenlab.ca/about/ Migration Tech Monitor [2023] Migration Tech Monitor, “Migration and Technology Monitor,” Feb. 2023. [Online]. Available: https://www.migrationtechmonitor.com Ada Lovelace Institute [2023] Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/ [147] American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. 
Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. 
Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Electronic Frontier Foundation, “About EFF,” Jul. 2007. [Online]. Available: https://www.eff.org/about Refugee Law Lab [2020] Refugee Law Lab, “Refugee Law Lab,” Jul. 2020. [Online]. Available: https://refugeelab.ca/ [144] The Citizen Lab, “About the Citizen Lab.” [Online]. Available: https://citizenlab.ca/about/ Migration Tech Monitor [2023] Migration Tech Monitor, “Migration and Technology Monitor,” Feb. 2023. [Online]. Available: https://www.migrationtechmonitor.com Ada Lovelace Institute [2023] Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/ [147] American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. 
Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. 
[Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. 
National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Refugee Law Lab, “Refugee Law Lab,” Jul. 2020. [Online]. Available: https://refugeelab.ca/ [144] The Citizen Lab, “About the Citizen Lab.” [Online]. Available: https://citizenlab.ca/about/ Migration Tech Monitor [2023] Migration Tech Monitor, “Migration and Technology Monitor,” Feb. 2023. [Online]. Available: https://www.migrationtechmonitor.com Ada Lovelace Institute [2023] Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/ [147] American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. 
Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. 
Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. 
[2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ The Citizen Lab, “About the Citizen Lab.” [Online]. Available: https://citizenlab.ca/about/ Migration Tech Monitor [2023] Migration Tech Monitor, “Migration and Technology Monitor,” Feb. 2023. [Online]. Available: https://www.migrationtechmonitor.com Ada Lovelace Institute [2023] Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/ [147] American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. 
Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. 
Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. 
[2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Migration Tech Monitor, “Migration and Technology Monitor,” Feb. 2023. [Online]. Available: https://www.migrationtechmonitor.com Ada Lovelace Institute [2023] Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/ [147] American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. 
2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. 
Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. 
Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. 
Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. 
Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. 
Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. 
Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. 
Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. 
Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. 
[Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. 
Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. 
Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. 
Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. 
Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. 
Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. 
Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. 
Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. 
Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ B. 
Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. 
Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. 
[Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 
2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. 
Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013
Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. 
Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. 
Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ J. Angwin and T. Parris Jr., “Facebook Lets Advertisers Exclude Users by Race,” ProPublica, Oct. 2016. [Online]. Available: https://www.propublica.org/article/facebook-lets-advertisers-exclude-users-by-race Keegan [2021] J. Keegan, “Facebook Got Rid of Racial Ad Categories. Or Did It? – The Markup,” The Markup, Jul. 2021. [Online]. Available: https://themarkup.org/citizen-browser/2021/07/09/facebook-got-rid-of-racial-ad-categories-or-did-it Faife and Ng [2021] C. Faife and A. Ng, “Credit Card Ads Were Targeted by Age, Violating Facebook’s Anti-Discrimination Policy – The Markup,” The Markup, Apr. 2021. [Online]. 
Available: https://themarkup.org/citizen-browser/2021/04/29/credit-card-ads-were-targeted-by-age-violating-facebooks-anti-discrimination-policy The Markup [2022] The Markup, “Citizen Browser,” Oct. 2022. [Online]. Available: https://themarkup.org/series/citizen-browser Ng and Faife [2021] A. Ng and C. Faife, “Facebook Pledges to Remove Discriminatory Credit and Loan Ads Discovered by The Markup – The Markup,” The Markup, May 2021. [Online]. Available: https://themarkup.org/citizen-browser/2021/05/04/facebook-pledges-to-remove-discriminatory-credit-and-loan-ads-discovered-by-the-markup Yin [2021] L. Yin, “Citing Markup Investigation, Civil Rights Group Demands Racial Equity Audit at Google – The Markup,” The Markup, May 2021. [Online]. Available: https://themarkup.org/google-the-giant/2021/05/04/citing-markup-investigation-civil-rights-group-demands-racial-equity-audit-at-google Carollo [2021] M. Carollo, “The Markup’s Work Cited in Effort to Outlaw Discriminatory Algorithms – The Markup,” The Markup, Dec. 2021. [Online]. Available: https://themarkup.org/locked-out/2021/12/17/the-markups-work-cited-in-effort-to-outlaw-discriminatory-algorithms Lecher and Keegan [2021] C. Lecher and J. Keegan, “Nevada Lawmakers Introduce Privacy Legislation After Markup Investigation into Vaccine Websites – The Markup,” The Markup, May 2021. [Online]. Available: https://themarkup.org/blacklight/2021/05/18/nevada-lawmakers-introduce-privacy-legislation-after-markup-investigation-into-vaccine-websites Benner et al. [2019] K. Benner, G. Thrush, and M. Isaac, “Facebook Engages in Housing Discrimination With Its Ad Practices, U.S. Says,” The New York Times, Mar. 2019. [Online]. Available: https://www.nytimes.com/2019/03/28/us/politics/facebook-housing-discrimination.html Feiner [2022] L. Feiner, “DOJ settles lawsuit with Facebook over allegedly discriminatory housing advertising,” CNBC, Jun. 2022. [Online]. Available: https://www.cnbc.com/2022/06/21/doj-settles-with-facebook-over-allegedly-discriminatory-housing-ads.html Carollo [2023] M. Carollo, “An Algorithm Decides Who Gets a Liver Transplant. Here Are 5 Things to Know. – The Markup,” May 2023. [Online]. Available: https://themarkup.org/hello-world/2023/05/20/an-algorithm-decides-who-gets-a-liver-transplant-here-are-5-things-to-know Carollo and Tanen [2023] M. Carollo and B. Tanen, “Poorer States Suffer Under New Organ Donation Rules, As Livers Go to Waste – The Markup,” The Markup, Mar. 2023. [Online]. Available: https://themarkup.org/organ-failure/2023/03/21/poorer-states-suffer-under-new-organ-donation-rules-as-livers-go-to-waste Electronic Frontier Foundation [2007] Electronic Frontier Foundation, “About EFF,” Jul. 2007. [Online]. Available: https://www.eff.org/about Refugee Law Lab [2020] Refugee Law Lab, “Refugee Law Lab,” Jul. 2020. [Online]. Available: https://refugeelab.ca/ [144] The Citizen Lab, “About the Citizen Lab.” [Online]. Available: https://citizenlab.ca/about/ Migration Tech Monitor [2023] Migration Tech Monitor, “Migration and Technology Monitor,” Feb. 2023. [Online]. Available: https://www.migrationtechmonitor.com Ada Lovelace Institute [2023] Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/ [147] American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. 
Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. 
Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. 
Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ J. Keegan, “Facebook Got Rid of Racial Ad Categories. Or Did It? – The Markup,” The Markup, Jul. 2021. [Online]. Available: https://themarkup.org/citizen-browser/2021/07/09/facebook-got-rid-of-racial-ad-categories-or-did-it Faife and Ng [2021] C. Faife and A. Ng, “Credit Card Ads Were Targeted by Age, Violating Facebook’s Anti-Discrimination Policy – The Markup,” The Markup, Apr. 2021. [Online]. Available: https://themarkup.org/citizen-browser/2021/04/29/credit-card-ads-were-targeted-by-age-violating-facebooks-anti-discrimination-policy The Markup [2022] The Markup, “Citizen Browser,” Oct. 2022. [Online]. 
Available: https://themarkup.org/series/citizen-browser Ng and Faife [2021] A. Ng and C. Faife, “Facebook Pledges to Remove Discriminatory Credit and Loan Ads Discovered by The Markup – The Markup,” The Markup, May 2021. [Online]. Available: https://themarkup.org/citizen-browser/2021/05/04/facebook-pledges-to-remove-discriminatory-credit-and-loan-ads-discovered-by-the-markup Yin [2021] L. Yin, “Citing Markup Investigation, Civil Rights Group Demands Racial Equity Audit at Google – The Markup,” The Markup, May 2021. [Online]. Available: https://themarkup.org/google-the-giant/2021/05/04/citing-markup-investigation-civil-rights-group-demands-racial-equity-audit-at-google Carollo [2021] M. Carollo, “The Markup’s Work Cited in Effort to Outlaw Discriminatory Algorithms – The Markup,” The Markup, Dec. 2021. [Online]. Available: https://themarkup.org/locked-out/2021/12/17/the-markups-work-cited-in-effort-to-outlaw-discriminatory-algorithms Lecher and Keegan [2021] C. Lecher and J. Keegan, “Nevada Lawmakers Introduce Privacy Legislation After Markup Investigation into Vaccine Websites – The Markup,” The Markup, May 2021. [Online]. Available: https://themarkup.org/blacklight/2021/05/18/nevada-lawmakers-introduce-privacy-legislation-after-markup-investigation-into-vaccine-websites Benner et al. [2019] K. Benner, G. Thrush, and M. Isaac, “Facebook Engages in Housing Discrimination With Its Ad Practices, U.S. Says,” The New York Times, Mar. 2019. [Online]. Available: https://www.nytimes.com/2019/03/28/us/politics/facebook-housing-discrimination.html Feiner [2022] L. Feiner, “DOJ settles lawsuit with Facebook over allegedly discriminatory housing advertising,” CNBC, Jun. 2022. [Online]. Available: https://www.cnbc.com/2022/06/21/doj-settles-with-facebook-over-allegedly-discriminatory-housing-ads.html Carollo [2023] M. Carollo, “An Algorithm Decides Who Gets a Liver Transplant. Here Are 5 Things to Know. – The Markup,” May 2023. [Online]. Available: https://themarkup.org/hello-world/2023/05/20/an-algorithm-decides-who-gets-a-liver-transplant-here-are-5-things-to-know Carollo and Tanen [2023] M. Carollo and B. Tanen, “Poorer States Suffer Under New Organ Donation Rules, As Livers Go to Waste – The Markup,” The Markup, Mar. 2023. [Online]. Available: https://themarkup.org/organ-failure/2023/03/21/poorer-states-suffer-under-new-organ-donation-rules-as-livers-go-to-waste Electronic Frontier Foundation [2007] Electronic Frontier Foundation, “About EFF,” Jul. 2007. [Online]. Available: https://www.eff.org/about Refugee Law Lab [2020] Refugee Law Lab, “Refugee Law Lab,” Jul. 2020. [Online]. Available: https://refugeelab.ca/ [144] The Citizen Lab, “About the Citizen Lab.” [Online]. Available: https://citizenlab.ca/about/ Migration Tech Monitor [2023] Migration Tech Monitor, “Migration and Technology Monitor,” Feb. 2023. [Online]. Available: https://www.migrationtechmonitor.com Ada Lovelace Institute [2023] Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/ [147] American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. 
Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. 
Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. 
Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ C. Faife and A. Ng, “Credit Card Ads Were Targeted by Age, Violating Facebook’s Anti-Discrimination Policy – The Markup,” The Markup, Apr. 2021. [Online]. Available: https://themarkup.org/citizen-browser/2021/04/29/credit-card-ads-were-targeted-by-age-violating-facebooks-anti-discrimination-policy The Markup [2022] The Markup, “Citizen Browser,” Oct. 2022. [Online]. Available: https://themarkup.org/series/citizen-browser Ng and Faife [2021] A. Ng and C. Faife, “Facebook Pledges to Remove Discriminatory Credit and Loan Ads Discovered by The Markup – The Markup,” The Markup, May 2021. [Online]. Available: https://themarkup.org/citizen-browser/2021/05/04/facebook-pledges-to-remove-discriminatory-credit-and-loan-ads-discovered-by-the-markup Yin [2021] L. Yin, “Citing Markup Investigation, Civil Rights Group Demands Racial Equity Audit at Google – The Markup,” The Markup, May 2021. [Online]. 
Available: https://themarkup.org/google-the-giant/2021/05/04/citing-markup-investigation-civil-rights-group-demands-racial-equity-audit-at-google Carollo [2021] M. Carollo, “The Markup’s Work Cited in Effort to Outlaw Discriminatory Algorithms – The Markup,” The Markup, Dec. 2021. [Online]. Available: https://themarkup.org/locked-out/2021/12/17/the-markups-work-cited-in-effort-to-outlaw-discriminatory-algorithms Lecher and Keegan [2021] C. Lecher and J. Keegan, “Nevada Lawmakers Introduce Privacy Legislation After Markup Investigation into Vaccine Websites – The Markup,” The Markup, May 2021. [Online]. Available: https://themarkup.org/blacklight/2021/05/18/nevada-lawmakers-introduce-privacy-legislation-after-markup-investigation-into-vaccine-websites Benner et al. [2019] K. Benner, G. Thrush, and M. Isaac, “Facebook Engages in Housing Discrimination With Its Ad Practices, U.S. Says,” The New York Times, Mar. 2019. [Online]. Available: https://www.nytimes.com/2019/03/28/us/politics/facebook-housing-discrimination.html Feiner [2022] L. Feiner, “DOJ settles lawsuit with Facebook over allegedly discriminatory housing advertising,” CNBC, Jun. 2022. [Online]. Available: https://www.cnbc.com/2022/06/21/doj-settles-with-facebook-over-allegedly-discriminatory-housing-ads.html Carollo [2023] M. Carollo, “An Algorithm Decides Who Gets a Liver Transplant. Here Are 5 Things to Know. – The Markup,” May 2023. [Online]. Available: https://themarkup.org/hello-world/2023/05/20/an-algorithm-decides-who-gets-a-liver-transplant-here-are-5-things-to-know Carollo and Tanen [2023] M. Carollo and B. Tanen, “Poorer States Suffer Under New Organ Donation Rules, As Livers Go to Waste – The Markup,” The Markup, Mar. 2023. [Online]. Available: https://themarkup.org/organ-failure/2023/03/21/poorer-states-suffer-under-new-organ-donation-rules-as-livers-go-to-waste Electronic Frontier Foundation [2007] Electronic Frontier Foundation, “About EFF,” Jul. 2007. [Online]. Available: https://www.eff.org/about Refugee Law Lab [2020] Refugee Law Lab, “Refugee Law Lab,” Jul. 2020. [Online]. Available: https://refugeelab.ca/ [144] The Citizen Lab, “About the Citizen Lab.” [Online]. Available: https://citizenlab.ca/about/ Migration Tech Monitor [2023] Migration Tech Monitor, “Migration and Technology Monitor,” Feb. 2023. [Online]. Available: https://www.migrationtechmonitor.com Ada Lovelace Institute [2023] Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/ [147] American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. 
Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   
New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598
A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669
A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761
B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022.
B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23. New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237
Y. D. Clarke, “Algorithmic Accountability Act of 2022,” H.R. 6580, 117th Congress, 2022. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf
A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf
B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/
P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558
Accountable Tech, AI Now Institute, and EPIC, “Zero Trust AI Governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance
M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/
T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as Culture, vol. 31, no. 1, pp. 121–135, 2022.
W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463.
D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479.
M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022.
H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, Feb. 2021.
L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986
A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184.
National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt
National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000
National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002
National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006
National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013
AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/
L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/
J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/
The Markup, “Citizen Browser,” Oct. 2022. [Online]. Available: https://themarkup.org/series/citizen-browser
A. Ng and C. Faife, “Facebook Pledges to Remove Discriminatory Credit and Loan Ads Discovered by The Markup,” The Markup, May 2021. [Online]. Available: https://themarkup.org/citizen-browser/2021/05/04/facebook-pledges-to-remove-discriminatory-credit-and-loan-ads-discovered-by-the-markup
L. Yin, “Citing Markup Investigation, Civil Rights Group Demands Racial Equity Audit at Google,” The Markup, May 2021. [Online]. Available: https://themarkup.org/google-the-giant/2021/05/04/citing-markup-investigation-civil-rights-group-demands-racial-equity-audit-at-google
M. Carollo, “The Markup’s Work Cited in Effort to Outlaw Discriminatory Algorithms,” The Markup, Dec. 2021. [Online]. Available: https://themarkup.org/locked-out/2021/12/17/the-markups-work-cited-in-effort-to-outlaw-discriminatory-algorithms
C. Lecher and J. Keegan, “Nevada Lawmakers Introduce Privacy Legislation After Markup Investigation into Vaccine Websites,” The Markup, May 2021. [Online]. Available: https://themarkup.org/blacklight/2021/05/18/nevada-lawmakers-introduce-privacy-legislation-after-markup-investigation-into-vaccine-websites
K. Benner, G. Thrush, and M. Isaac, “Facebook Engages in Housing Discrimination With Its Ad Practices, U.S. Says,” The New York Times, Mar. 2019. [Online]. Available: https://www.nytimes.com/2019/03/28/us/politics/facebook-housing-discrimination.html
L. Feiner, “DOJ settles lawsuit with Facebook over allegedly discriminatory housing advertising,” CNBC, Jun. 2022. [Online]. Available: https://www.cnbc.com/2022/06/21/doj-settles-with-facebook-over-allegedly-discriminatory-housing-ads.html
M. Carollo, “An Algorithm Decides Who Gets a Liver Transplant. Here Are 5 Things to Know.,” The Markup, May 2023. [Online]. Available: https://themarkup.org/hello-world/2023/05/20/an-algorithm-decides-who-gets-a-liver-transplant-here-are-5-things-to-know
M. Carollo and B. Tanen, “Poorer States Suffer Under New Organ Donation Rules, As Livers Go to Waste,” The Markup, Mar. 2023. [Online]. Available: https://themarkup.org/organ-failure/2023/03/21/poorer-states-suffer-under-new-organ-donation-rules-as-livers-go-to-waste
Electronic Frontier Foundation, “About EFF,” Jul. 2007. [Online]. Available: https://www.eff.org/about
Refugee Law Lab, “Refugee Law Lab,” Jul. 2020. [Online]. Available: https://refugeelab.ca/
The Citizen Lab, “About the Citizen Lab.” [Online]. Available: https://citizenlab.ca/about/
Migration Tech Monitor, “Migration and Technology Monitor,” Feb. 2023. [Online]. Available: https://www.migrationtechmonitor.com
Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/
American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/
Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us
K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims
ACLU of Idaho, “K.W. v. Armstrong,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong
Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/
National Institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/
Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/
Information Commissioner’s Office, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/
National Institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home
National Institute of Standards and Technology, “Artificial Intelligence Risk Management Framework (AI RMF 1.0),” 2023.
Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework: Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/
P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, NIST Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf
P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443). IEEE, 2003, p. 44.
M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf
Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/
M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209
I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151.
S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im)possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021.
Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ The Citizen Lab, “About the Citizen Lab.” [Online]. Available: https://citizenlab.ca/about/ Migration Tech Monitor [2023] Migration Tech Monitor, “Migration and Technology Monitor,” Feb. 2023. [Online]. Available: https://www.migrationtechmonitor.com Ada Lovelace Institute [2023] Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/ [147] American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. 
Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. 
Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. 
Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Migration Tech Monitor, “Migration and Technology Monitor,” Feb. 2023. [Online]. Available: https://www.migrationtechmonitor.com Ada Lovelace Institute [2023] Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/ [147] American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. 
Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. 
Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. 
Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/ [147] American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. 
[Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. 
Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. 
Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. 
Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   
New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. 
Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. 
Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/
National Institute of Standards and Technology AIRC Team, "NIST Trustworthy & Responsible AI Resource Center." [Online]. Available: https://airc.nist.gov/home
National Institute of Standards and Technology, "Artificial Intelligence Risk Management Framework (AI RMF 1.0)," 2023.
Information Commissioner's Office, "Annex A: Fairness in the AI Lifecycle," Guidance on the AI Auditing Framework, Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/
P. Grother, M. Ngan, and K. Hanaoka, "Face Recognition Vendor Test Part 3: Demographic Effects," National Institute of Standards and Technology, Gaithersburg, MD, NISTIR 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf
P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, "Face recognition vendor test 2002," in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443). IEEE, 2003, p. 44.
M. Ngan, P. Grother, and K. Hanaoka, "Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms," National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf
Information Commissioner's Office, "ICO issues provisional view to fine Clearview AI Inc over £17 million," Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/
M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, "Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising," Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209
I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, "Saving face: Investigating the ethical concerns of facial recognition auditing," in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151.
S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, "The (im)possibility of fairness: Different value systems require different mechanisms for fair decision making," Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021.
A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, "Fairness and Abstraction in Sociotechnical Systems," in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* '19. New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598
A. Xiang, "Reconciling Legal and Technical Approaches to Algorithmic Bias," Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669
A. Xiang and I. D. Raji, "On the Legal Compatibility of Fairness Definitions," Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761
B. Green, "Escaping the impossibility of fairness: From formal to substantive algorithmic fairness," Philosophy & Technology, vol. 35, no. 4, p. 90, 2022.
B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, "Strategic Evaluation," in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO '23. New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237
Y. D. Clarke, "Algorithmic Accountability Act of 2022," 2022. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf
A. Lenhart, "Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools," Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf
B. Perrigo, "California Bill Proposes Regulating AI at State Level," Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/
P. Mendelson, "Stop Discrimination by Algorithms Act of 2021," Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558
Accountable Tech, AI Now Institute, and Electronic Privacy Information Center, "Zero Trust AI Governance," 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance
M. Stein and C. Dunlop, "Safe before sale," Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/
T. Phan, J. Goldenfein, M. Mann, and D. Kuch, "Economies of virtue: the circulation of 'ethics' in big tech," Science as Culture, vol. 31, no. 1, pp. 121–135, 2022.
W. Boag, H. Suresh, B. Lepe, and C. D'Ignazio, "Tech worker organizing for power and accountability," in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463.
D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, "It's about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?" in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479.
M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, "An AI ethics 'David and Goliath': value conflicts between large tech companies and their employees," AI & Society, pp. 1–16, 2022.
H. Schellmann, "Auditors are testing hiring algorithms for bias, but there's no easy fix," MIT Technology Review, Feb. 11, 2021.
L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, "Sociotechnical Safety Evaluation of Generative AI Systems," Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986
A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, "The values encoded in machine learning research," in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184.
National Institute of Standards and Technology, "Face Recognition Vendor Test (FRVT)," Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt
——, "Face Recognition Vendor Test (FRVT) 2000," 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000
——, "Face Recognition Vendor Test (FRVT) 2002," 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002
——, "Face Recognition Vendor Test (FRVT) 2006," 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006
——, "Face Recognition Vendor Test (FRVT) 2013," 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013
AWO, "Announcing our algorithm governance services," May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/
L. Groves, "Algorithmic impact assessment: A case study in healthcare," Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/
J. Brennan and A. Circiumaru, "Getting under the hood of big tech," Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/
K. Gullo, "EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims," Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims
ACLU of Idaho, "K.W. v. Armstrong," Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong
Information Commissioner's Office, "Information Commissioner's Office (ICO)," Oct. 2023. [Online]. Available: https://ico.org.uk/
National Institute of Standards and Technology, "National Institute of Standards and Technology," Oct. 2023. [Online]. Available: https://www.nist.gov/
Information Commissioner's Office, "ICO fines TikTok £12.7 million for misusing children's data," May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/
[Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. 
Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. 
Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. 
Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. 
[2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. 
Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 
2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. 
Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. 
D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. 
National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. 
Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. 
National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 
90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. 
Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. 
Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. 
[Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. 
Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. 
Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 
1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. 
Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. 
[Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. 
Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. 
Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. 
[Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. 
Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. 
Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. 
Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. 
M. Heikkilä, “The viral AI avatar app Lensa undressed me—without my consent,” MIT Technology Review, Dec. 2022. [Online]. Available: https://www.technologyreview.com/2022/12/12/1064751/the-viral-ai-avatar-app-lensa-undressed-me-without-my-consent/
ProPublica, “Investigative Journalism and News in the Public Interest,” Oct. 2023. [Online]. Available: https://www.propublica.org/
The Markup, “About Us.” [Online]. Available: https://themarkup.org/about
A. Jeffries and L. Yin, “Amazon Puts Its Own “Brands” First Above Better-Rated Products – The Markup,” The Markup, Oct. 2021. [Online]. Available: https://themarkup.org/amazons-advantage/2021/10/14/amazon-puts-its-own-brands-first-above-better-rated-products
J. Angwin and T. Parris Jr., “Facebook Lets Advertisers Exclude Users by Race,” ProPublica, Oct. 2016. [Online]. Available: https://www.propublica.org/article/facebook-lets-advertisers-exclude-users-by-race
J. Keegan, “Facebook Got Rid of Racial Ad Categories. Or Did It? – The Markup,” The Markup, Jul. 2021. [Online]. Available: https://themarkup.org/citizen-browser/2021/07/09/facebook-got-rid-of-racial-ad-categories-or-did-it
C. Faife and A. Ng, “Credit Card Ads Were Targeted by Age, Violating Facebook’s Anti-Discrimination Policy – The Markup,” The Markup, Apr. 2021. [Online]. Available: https://themarkup.org/citizen-browser/2021/04/29/credit-card-ads-were-targeted-by-age-violating-facebooks-anti-discrimination-policy
The Markup, “Citizen Browser,” Oct. 2022. [Online]. Available: https://themarkup.org/series/citizen-browser
A. Ng and C. Faife, “Facebook Pledges to Remove Discriminatory Credit and Loan Ads Discovered by The Markup – The Markup,” The Markup, May 2021. [Online]. Available: https://themarkup.org/citizen-browser/2021/05/04/facebook-pledges-to-remove-discriminatory-credit-and-loan-ads-discovered-by-the-markup
L. Yin, “Citing Markup Investigation, Civil Rights Group Demands Racial Equity Audit at Google – The Markup,” The Markup, May 2021. [Online]. Available: https://themarkup.org/google-the-giant/2021/05/04/citing-markup-investigation-civil-rights-group-demands-racial-equity-audit-at-google
M. Carollo, “The Markup’s Work Cited in Effort to Outlaw Discriminatory Algorithms – The Markup,” The Markup, Dec. 2021. [Online]. Available: https://themarkup.org/locked-out/2021/12/17/the-markups-work-cited-in-effort-to-outlaw-discriminatory-algorithms
C. Lecher and J. Keegan, “Nevada Lawmakers Introduce Privacy Legislation After Markup Investigation into Vaccine Websites – The Markup,” The Markup, May 2021. [Online]. Available: https://themarkup.org/blacklight/2021/05/18/nevada-lawmakers-introduce-privacy-legislation-after-markup-investigation-into-vaccine-websites
K. Benner, G. Thrush, and M. Isaac, “Facebook Engages in Housing Discrimination With Its Ad Practices, U.S. Says,” The New York Times, Mar. 2019. [Online]. Available: https://www.nytimes.com/2019/03/28/us/politics/facebook-housing-discrimination.html
L. Feiner, “DOJ settles lawsuit with Facebook over allegedly discriminatory housing advertising,” CNBC, Jun. 2022. [Online]. Available: https://www.cnbc.com/2022/06/21/doj-settles-with-facebook-over-allegedly-discriminatory-housing-ads.html
M. Carollo, “An Algorithm Decides Who Gets a Liver Transplant. Here Are 5 Things to Know. – The Markup,” May 2023. [Online]. Available: https://themarkup.org/hello-world/2023/05/20/an-algorithm-decides-who-gets-a-liver-transplant-here-are-5-things-to-know
M. Carollo and B. Tanen, “Poorer States Suffer Under New Organ Donation Rules, As Livers Go to Waste – The Markup,” The Markup, Mar. 2023. [Online]. Available: https://themarkup.org/organ-failure/2023/03/21/poorer-states-suffer-under-new-organ-donation-rules-as-livers-go-to-waste
Electronic Frontier Foundation, “About EFF,” Jul. 2007. [Online]. Available: https://www.eff.org/about
Refugee Law Lab, “Refugee Law Lab,” Jul. 2020. [Online]. Available: https://refugeelab.ca/
The Citizen Lab, “About the Citizen Lab.” [Online]. Available: https://citizenlab.ca/about/
Migration Tech Monitor, “Migration and Technology Monitor,” Feb. 2023. [Online]. Available: https://www.migrationtechmonitor.com
Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/
American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/
Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us
K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims
ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong
Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/
National Institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/
Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/
——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/
National Institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home
National Institute of Standards and Technology, “Artificial Intelligence Risk Management Framework (AI RMF 1.0),” 2023.
Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/
P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, NISTIR 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf
P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443). IEEE, 2003, p. 44.
M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf
Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/
M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209
I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151.
S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im)possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021.
A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19. New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598
A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669
A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761
B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022.
B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23. New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237
Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf
A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf
B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/
P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558
Accountable Tech, AI Now Institute, and EPIC, “Zero trust AI governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance
M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/
T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as Culture, vol. 31, no. 1, pp. 121–135, 2022.
W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463.
D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479.
M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022.
H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, Feb. 2021.
L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986
A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184.
National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt
——, “Face Recognition Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000
——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002
——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006
——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013
AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/
L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/
J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/
AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. 
Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. 
Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. Jeffries and L. Yin, “Amazon Puts Its Own “Brands” First Above Better-Rated Products – The Markup,” The Markup, Oct. 2021. [Online]. Available: https://themarkup.org/amazons-advantage/2021/10/14/amazon-puts-its-own-brands-first-above-better-rated-products Angwin and Parris Jr. [2016] J. Angwin and T. Parris Jr., “Facebook Lets Advertisers Exclude Users by Race,” ProPublica, Oct. 2016. [Online]. Available: https://www.propublica.org/article/facebook-lets-advertisers-exclude-users-by-race Keegan [2021] J. Keegan, “Facebook Got Rid of Racial Ad Categories. Or Did It? – The Markup,” The Markup, Jul. 2021. [Online]. Available: https://themarkup.org/citizen-browser/2021/07/09/facebook-got-rid-of-racial-ad-categories-or-did-it Faife and Ng [2021] C. Faife and A. Ng, “Credit Card Ads Were Targeted by Age, Violating Facebook’s Anti-Discrimination Policy – The Markup,” The Markup, Apr. 2021. [Online]. Available: https://themarkup.org/citizen-browser/2021/04/29/credit-card-ads-were-targeted-by-age-violating-facebooks-anti-discrimination-policy The Markup [2022] The Markup, “Citizen Browser,” Oct. 2022. [Online]. Available: https://themarkup.org/series/citizen-browser Ng and Faife [2021] A. Ng and C. Faife, “Facebook Pledges to Remove Discriminatory Credit and Loan Ads Discovered by The Markup – The Markup,” The Markup, May 2021. [Online]. Available: https://themarkup.org/citizen-browser/2021/05/04/facebook-pledges-to-remove-discriminatory-credit-and-loan-ads-discovered-by-the-markup Yin [2021] L. Yin, “Citing Markup Investigation, Civil Rights Group Demands Racial Equity Audit at Google – The Markup,” The Markup, May 2021. [Online]. Available: https://themarkup.org/google-the-giant/2021/05/04/citing-markup-investigation-civil-rights-group-demands-racial-equity-audit-at-google Carollo [2021] M. Carollo, “The Markup’s Work Cited in Effort to Outlaw Discriminatory Algorithms – The Markup,” The Markup, Dec. 2021. [Online]. 
Available: https://themarkup.org/locked-out/2021/12/17/the-markups-work-cited-in-effort-to-outlaw-discriminatory-algorithms Lecher and Keegan [2021] C. Lecher and J. Keegan, “Nevada Lawmakers Introduce Privacy Legislation After Markup Investigation into Vaccine Websites – The Markup,” The Markup, May 2021. [Online]. Available: https://themarkup.org/blacklight/2021/05/18/nevada-lawmakers-introduce-privacy-legislation-after-markup-investigation-into-vaccine-websites Benner et al. [2019] K. Benner, G. Thrush, and M. Isaac, “Facebook Engages in Housing Discrimination With Its Ad Practices, U.S. Says,” The New York Times, Mar. 2019. [Online]. Available: https://www.nytimes.com/2019/03/28/us/politics/facebook-housing-discrimination.html Feiner [2022] L. Feiner, “DOJ settles lawsuit with Facebook over allegedly discriminatory housing advertising,” CNBC, Jun. 2022. [Online]. Available: https://www.cnbc.com/2022/06/21/doj-settles-with-facebook-over-allegedly-discriminatory-housing-ads.html Carollo [2023] M. Carollo, “An Algorithm Decides Who Gets a Liver Transplant. Here Are 5 Things to Know. – The Markup,” May 2023. [Online]. Available: https://themarkup.org/hello-world/2023/05/20/an-algorithm-decides-who-gets-a-liver-transplant-here-are-5-things-to-know Carollo and Tanen [2023] M. Carollo and B. Tanen, “Poorer States Suffer Under New Organ Donation Rules, As Livers Go to Waste – The Markup,” The Markup, Mar. 2023. [Online]. Available: https://themarkup.org/organ-failure/2023/03/21/poorer-states-suffer-under-new-organ-donation-rules-as-livers-go-to-waste Electronic Frontier Foundation [2007] Electronic Frontier Foundation, “About EFF,” Jul. 2007. [Online]. Available: https://www.eff.org/about Refugee Law Lab [2020] Refugee Law Lab, “Refugee Law Lab,” Jul. 2020. [Online]. Available: https://refugeelab.ca/ [144] The Citizen Lab, “About the Citizen Lab.” [Online]. Available: https://citizenlab.ca/about/ Migration Tech Monitor [2023] Migration Tech Monitor, “Migration and Technology Monitor,” Feb. 2023. [Online]. Available: https://www.migrationtechmonitor.com Ada Lovelace Institute [2023] Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/ [147] American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. 
Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. 
Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. 
[2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ J. Angwin and T. Parris Jr., “Facebook Lets Advertisers Exclude Users by Race,” ProPublica, Oct. 2016. [Online]. Available: https://www.propublica.org/article/facebook-lets-advertisers-exclude-users-by-race Keegan [2021] J. Keegan, “Facebook Got Rid of Racial Ad Categories. Or Did It? – The Markup,” The Markup, Jul. 2021. [Online]. Available: https://themarkup.org/citizen-browser/2021/07/09/facebook-got-rid-of-racial-ad-categories-or-did-it Faife and Ng [2021] C. Faife and A. Ng, “Credit Card Ads Were Targeted by Age, Violating Facebook’s Anti-Discrimination Policy – The Markup,” The Markup, Apr. 2021. [Online]. Available: https://themarkup.org/citizen-browser/2021/04/29/credit-card-ads-were-targeted-by-age-violating-facebooks-anti-discrimination-policy The Markup [2022] The Markup, “Citizen Browser,” Oct. 2022. [Online]. Available: https://themarkup.org/series/citizen-browser Ng and Faife [2021] A. Ng and C. Faife, “Facebook Pledges to Remove Discriminatory Credit and Loan Ads Discovered by The Markup – The Markup,” The Markup, May 2021. [Online]. Available: https://themarkup.org/citizen-browser/2021/05/04/facebook-pledges-to-remove-discriminatory-credit-and-loan-ads-discovered-by-the-markup Yin [2021] L. Yin, “Citing Markup Investigation, Civil Rights Group Demands Racial Equity Audit at Google – The Markup,” The Markup, May 2021. [Online]. Available: https://themarkup.org/google-the-giant/2021/05/04/citing-markup-investigation-civil-rights-group-demands-racial-equity-audit-at-google Carollo [2021] M. 
Carollo, “The Markup’s Work Cited in Effort to Outlaw Discriminatory Algorithms – The Markup,” The Markup, Dec. 2021. [Online]. Available: https://themarkup.org/locked-out/2021/12/17/the-markups-work-cited-in-effort-to-outlaw-discriminatory-algorithms Lecher and Keegan [2021] C. Lecher and J. Keegan, “Nevada Lawmakers Introduce Privacy Legislation After Markup Investigation into Vaccine Websites – The Markup,” The Markup, May 2021. [Online]. Available: https://themarkup.org/blacklight/2021/05/18/nevada-lawmakers-introduce-privacy-legislation-after-markup-investigation-into-vaccine-websites Benner et al. [2019] K. Benner, G. Thrush, and M. Isaac, “Facebook Engages in Housing Discrimination With Its Ad Practices, U.S. Says,” The New York Times, Mar. 2019. [Online]. Available: https://www.nytimes.com/2019/03/28/us/politics/facebook-housing-discrimination.html Feiner [2022] L. Feiner, “DOJ settles lawsuit with Facebook over allegedly discriminatory housing advertising,” CNBC, Jun. 2022. [Online]. Available: https://www.cnbc.com/2022/06/21/doj-settles-with-facebook-over-allegedly-discriminatory-housing-ads.html Carollo [2023] M. Carollo, “An Algorithm Decides Who Gets a Liver Transplant. Here Are 5 Things to Know. – The Markup,” May 2023. [Online]. Available: https://themarkup.org/hello-world/2023/05/20/an-algorithm-decides-who-gets-a-liver-transplant-here-are-5-things-to-know Carollo and Tanen [2023] M. Carollo and B. Tanen, “Poorer States Suffer Under New Organ Donation Rules, As Livers Go to Waste – The Markup,” The Markup, Mar. 2023. [Online]. Available: https://themarkup.org/organ-failure/2023/03/21/poorer-states-suffer-under-new-organ-donation-rules-as-livers-go-to-waste Electronic Frontier Foundation [2007] Electronic Frontier Foundation, “About EFF,” Jul. 2007. [Online]. Available: https://www.eff.org/about Refugee Law Lab [2020] Refugee Law Lab, “Refugee Law Lab,” Jul. 2020. [Online]. Available: https://refugeelab.ca/ [144] The Citizen Lab, “About the Citizen Lab.” [Online]. Available: https://citizenlab.ca/about/ Migration Tech Monitor [2023] Migration Tech Monitor, “Migration and Technology Monitor,” Feb. 2023. [Online]. Available: https://www.migrationtechmonitor.com Ada Lovelace Institute [2023] Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/ [147] American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. 
Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. 
Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. 
[2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ J. Keegan, “Facebook Got Rid of Racial Ad Categories. Or Did It? – The Markup,” The Markup, Jul. 2021. [Online]. Available: https://themarkup.org/citizen-browser/2021/07/09/facebook-got-rid-of-racial-ad-categories-or-did-it Faife and Ng [2021] C. Faife and A. Ng, “Credit Card Ads Were Targeted by Age, Violating Facebook’s Anti-Discrimination Policy – The Markup,” The Markup, Apr. 2021. [Online]. Available: https://themarkup.org/citizen-browser/2021/04/29/credit-card-ads-were-targeted-by-age-violating-facebooks-anti-discrimination-policy The Markup [2022] The Markup, “Citizen Browser,” Oct. 2022. [Online]. Available: https://themarkup.org/series/citizen-browser Ng and Faife [2021] A. Ng and C. Faife, “Facebook Pledges to Remove Discriminatory Credit and Loan Ads Discovered by The Markup – The Markup,” The Markup, May 2021. [Online]. Available: https://themarkup.org/citizen-browser/2021/05/04/facebook-pledges-to-remove-discriminatory-credit-and-loan-ads-discovered-by-the-markup Yin [2021] L. Yin, “Citing Markup Investigation, Civil Rights Group Demands Racial Equity Audit at Google – The Markup,” The Markup, May 2021. [Online]. Available: https://themarkup.org/google-the-giant/2021/05/04/citing-markup-investigation-civil-rights-group-demands-racial-equity-audit-at-google Carollo [2021] M. Carollo, “The Markup’s Work Cited in Effort to Outlaw Discriminatory Algorithms – The Markup,” The Markup, Dec. 2021. [Online]. 
Available: https://themarkup.org/locked-out/2021/12/17/the-markups-work-cited-in-effort-to-outlaw-discriminatory-algorithms Lecher and Keegan [2021] C. Lecher and J. Keegan, “Nevada Lawmakers Introduce Privacy Legislation After Markup Investigation into Vaccine Websites – The Markup,” The Markup, May 2021. [Online]. Available: https://themarkup.org/blacklight/2021/05/18/nevada-lawmakers-introduce-privacy-legislation-after-markup-investigation-into-vaccine-websites Benner et al. [2019] K. Benner, G. Thrush, and M. Isaac, “Facebook Engages in Housing Discrimination With Its Ad Practices, U.S. Says,” The New York Times, Mar. 2019. [Online]. Available: https://www.nytimes.com/2019/03/28/us/politics/facebook-housing-discrimination.html Feiner [2022] L. Feiner, “DOJ settles lawsuit with Facebook over allegedly discriminatory housing advertising,” CNBC, Jun. 2022. [Online]. Available: https://www.cnbc.com/2022/06/21/doj-settles-with-facebook-over-allegedly-discriminatory-housing-ads.html Carollo [2023] M. Carollo, “An Algorithm Decides Who Gets a Liver Transplant. Here Are 5 Things to Know. – The Markup,” May 2023. [Online]. Available: https://themarkup.org/hello-world/2023/05/20/an-algorithm-decides-who-gets-a-liver-transplant-here-are-5-things-to-know Carollo and Tanen [2023] M. Carollo and B. Tanen, “Poorer States Suffer Under New Organ Donation Rules, As Livers Go to Waste – The Markup,” The Markup, Mar. 2023. [Online]. Available: https://themarkup.org/organ-failure/2023/03/21/poorer-states-suffer-under-new-organ-donation-rules-as-livers-go-to-waste Electronic Frontier Foundation [2007] Electronic Frontier Foundation, “About EFF,” Jul. 2007. [Online]. Available: https://www.eff.org/about Refugee Law Lab [2020] Refugee Law Lab, “Refugee Law Lab,” Jul. 2020. [Online]. Available: https://refugeelab.ca/ [144] The Citizen Lab, “About the Citizen Lab.” [Online]. Available: https://citizenlab.ca/about/ Migration Tech Monitor [2023] Migration Tech Monitor, “Migration and Technology Monitor,” Feb. 2023. [Online]. Available: https://www.migrationtechmonitor.com Ada Lovelace Institute [2023] Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/ [147] American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. 
Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. 
Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. 
[2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ C. Faife and A. Ng, “Credit Card Ads Were Targeted by Age, Violating Facebook’s Anti-Discrimination Policy – The Markup,” The Markup, Apr. 2021. [Online]. Available: https://themarkup.org/citizen-browser/2021/04/29/credit-card-ads-were-targeted-by-age-violating-facebooks-anti-discrimination-policy The Markup [2022] The Markup, “Citizen Browser,” Oct. 2022. [Online]. Available: https://themarkup.org/series/citizen-browser Ng and Faife [2021] A. Ng and C. Faife, “Facebook Pledges to Remove Discriminatory Credit and Loan Ads Discovered by The Markup – The Markup,” The Markup, May 2021. [Online]. Available: https://themarkup.org/citizen-browser/2021/05/04/facebook-pledges-to-remove-discriminatory-credit-and-loan-ads-discovered-by-the-markup Yin [2021] L. Yin, “Citing Markup Investigation, Civil Rights Group Demands Racial Equity Audit at Google – The Markup,” The Markup, May 2021. [Online]. Available: https://themarkup.org/google-the-giant/2021/05/04/citing-markup-investigation-civil-rights-group-demands-racial-equity-audit-at-google Carollo [2021] M. Carollo, “The Markup’s Work Cited in Effort to Outlaw Discriminatory Algorithms – The Markup,” The Markup, Dec. 2021. [Online]. Available: https://themarkup.org/locked-out/2021/12/17/the-markups-work-cited-in-effort-to-outlaw-discriminatory-algorithms Lecher and Keegan [2021] C. Lecher and J. Keegan, “Nevada Lawmakers Introduce Privacy Legislation After Markup Investigation into Vaccine Websites – The Markup,” The Markup, May 2021. [Online]. 
Available: https://themarkup.org/blacklight/2021/05/18/nevada-lawmakers-introduce-privacy-legislation-after-markup-investigation-into-vaccine-websites Benner et al. [2019] K. Benner, G. Thrush, and M. Isaac, “Facebook Engages in Housing Discrimination With Its Ad Practices, U.S. Says,” The New York Times, Mar. 2019. [Online]. Available: https://www.nytimes.com/2019/03/28/us/politics/facebook-housing-discrimination.html Feiner [2022] L. Feiner, “DOJ settles lawsuit with Facebook over allegedly discriminatory housing advertising,” CNBC, Jun. 2022. [Online]. Available: https://www.cnbc.com/2022/06/21/doj-settles-with-facebook-over-allegedly-discriminatory-housing-ads.html Carollo [2023] M. Carollo, “An Algorithm Decides Who Gets a Liver Transplant. Here Are 5 Things to Know. – The Markup,” May 2023. [Online]. Available: https://themarkup.org/hello-world/2023/05/20/an-algorithm-decides-who-gets-a-liver-transplant-here-are-5-things-to-know Carollo and Tanen [2023] M. Carollo and B. Tanen, “Poorer States Suffer Under New Organ Donation Rules, As Livers Go to Waste – The Markup,” The Markup, Mar. 2023. [Online]. Available: https://themarkup.org/organ-failure/2023/03/21/poorer-states-suffer-under-new-organ-donation-rules-as-livers-go-to-waste Electronic Frontier Foundation [2007] Electronic Frontier Foundation, “About EFF,” Jul. 2007. [Online]. Available: https://www.eff.org/about Refugee Law Lab [2020] Refugee Law Lab, “Refugee Law Lab,” Jul. 2020. [Online]. Available: https://refugeelab.ca/ [144] The Citizen Lab, “About the Citizen Lab.” [Online]. Available: https://citizenlab.ca/about/ Migration Tech Monitor [2023] Migration Tech Monitor, “Migration and Technology Monitor,” Feb. 2023. [Online]. Available: https://www.migrationtechmonitor.com Ada Lovelace Institute [2023] Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/ [147] American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. 
Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/
Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf
Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443). IEEE, 2003, p. 44.
Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf
Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/
Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209
Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151.
Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im)possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021.
Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19. New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598
Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   
New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. 
Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ K. Benner, G. Thrush, and M. Isaac, “Facebook Engages in Housing Discrimination With Its Ad Practices, U.S. Says,” The New York Times, Mar. 2019. [Online]. Available: https://www.nytimes.com/2019/03/28/us/politics/facebook-housing-discrimination.html Feiner [2022] L. Feiner, “DOJ settles lawsuit with Facebook over allegedly discriminatory housing advertising,” CNBC, Jun. 2022. [Online]. Available: https://www.cnbc.com/2022/06/21/doj-settles-with-facebook-over-allegedly-discriminatory-housing-ads.html Carollo [2023] M. Carollo, “An Algorithm Decides Who Gets a Liver Transplant. Here Are 5 Things to Know. – The Markup,” May 2023. [Online]. Available: https://themarkup.org/hello-world/2023/05/20/an-algorithm-decides-who-gets-a-liver-transplant-here-are-5-things-to-know Carollo and Tanen [2023] M. Carollo and B. Tanen, “Poorer States Suffer Under New Organ Donation Rules, As Livers Go to Waste – The Markup,” The Markup, Mar. 2023. [Online]. Available: https://themarkup.org/organ-failure/2023/03/21/poorer-states-suffer-under-new-organ-donation-rules-as-livers-go-to-waste Electronic Frontier Foundation [2007] Electronic Frontier Foundation, “About EFF,” Jul. 2007. [Online]. Available: https://www.eff.org/about Refugee Law Lab [2020] Refugee Law Lab, “Refugee Law Lab,” Jul. 2020. [Online]. Available: https://refugeelab.ca/ [144] The Citizen Lab, “About the Citizen Lab.” [Online]. 
Available: https://citizenlab.ca/about/ Migration Tech Monitor [2023] Migration Tech Monitor, “Migration and Technology Monitor,” Feb. 2023. [Online]. Available: https://www.migrationtechmonitor.com Ada Lovelace Institute [2023] Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/ [147] American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. 
Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. 
Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ L. 
Feiner, “DOJ settles lawsuit with Facebook over allegedly discriminatory housing advertising,” CNBC, Jun. 2022. [Online]. Available: https://www.cnbc.com/2022/06/21/doj-settles-with-facebook-over-allegedly-discriminatory-housing-ads.html Carollo [2023] M. Carollo, “An Algorithm Decides Who Gets a Liver Transplant. Here Are 5 Things to Know. – The Markup,” May 2023. [Online]. Available: https://themarkup.org/hello-world/2023/05/20/an-algorithm-decides-who-gets-a-liver-transplant-here-are-5-things-to-know Carollo and Tanen [2023] M. Carollo and B. Tanen, “Poorer States Suffer Under New Organ Donation Rules, As Livers Go to Waste – The Markup,” The Markup, Mar. 2023. [Online]. Available: https://themarkup.org/organ-failure/2023/03/21/poorer-states-suffer-under-new-organ-donation-rules-as-livers-go-to-waste Electronic Frontier Foundation [2007] Electronic Frontier Foundation, “About EFF,” Jul. 2007. [Online]. Available: https://www.eff.org/about Refugee Law Lab [2020] Refugee Law Lab, “Refugee Law Lab,” Jul. 2020. [Online]. Available: https://refugeelab.ca/ [144] The Citizen Lab, “About the Citizen Lab.” [Online]. Available: https://citizenlab.ca/about/ Migration Tech Monitor [2023] Migration Tech Monitor, “Migration and Technology Monitor,” Feb. 2023. [Online]. Available: https://www.migrationtechmonitor.com Ada Lovelace Institute [2023] Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/ [147] American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. 
Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. 
Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. 
[Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. Carollo, “An Algorithm Decides Who Gets a Liver Transplant. Here Are 5 Things to Know. – The Markup,” May 2023. [Online]. Available: https://themarkup.org/hello-world/2023/05/20/an-algorithm-decides-who-gets-a-liver-transplant-here-are-5-things-to-know Carollo and Tanen [2023] M. Carollo and B. Tanen, “Poorer States Suffer Under New Organ Donation Rules, As Livers Go to Waste – The Markup,” The Markup, Mar. 2023. [Online]. Available: https://themarkup.org/organ-failure/2023/03/21/poorer-states-suffer-under-new-organ-donation-rules-as-livers-go-to-waste Electronic Frontier Foundation [2007] Electronic Frontier Foundation, “About EFF,” Jul. 2007. [Online]. Available: https://www.eff.org/about Refugee Law Lab [2020] Refugee Law Lab, “Refugee Law Lab,” Jul. 2020. [Online]. Available: https://refugeelab.ca/ [144] The Citizen Lab, “About the Citizen Lab.” [Online]. Available: https://citizenlab.ca/about/ Migration Tech Monitor [2023] Migration Tech Monitor, “Migration and Technology Monitor,” Feb. 2023. [Online]. Available: https://www.migrationtechmonitor.com Ada Lovelace Institute [2023] Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/ [147] American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. 
[Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. 
Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. 
National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. Carollo and B. Tanen, “Poorer States Suffer Under New Organ Donation Rules, As Livers Go to Waste – The Markup,” The Markup, Mar. 2023. [Online]. Available: https://themarkup.org/organ-failure/2023/03/21/poorer-states-suffer-under-new-organ-donation-rules-as-livers-go-to-waste Electronic Frontier Foundation [2007] Electronic Frontier Foundation, “About EFF,” Jul. 2007. [Online]. Available: https://www.eff.org/about Refugee Law Lab [2020] Refugee Law Lab, “Refugee Law Lab,” Jul. 2020. [Online]. Available: https://refugeelab.ca/ [144] The Citizen Lab, “About the Citizen Lab.” [Online]. Available: https://citizenlab.ca/about/ Migration Tech Monitor [2023] Migration Tech Monitor, “Migration and Technology Monitor,” Feb. 2023. [Online]. Available: https://www.migrationtechmonitor.com Ada Lovelace Institute [2023] Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/ [147] American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. 
Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. 
[Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. 
National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. 
Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. 
Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. 
Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. 
Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. 
[Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. 
Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. 
Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. 
Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. 
[Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. 
National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. 
Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. 
Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. 
[Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 
2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. 
Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. 
Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. 
Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. 
[Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. 
Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. 
Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. 
Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. 
[2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. 
Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 
2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. 
Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. 
D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. 
National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. 
Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. 
National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 
90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. 
Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. 
Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. 
[Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. 
Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. 
Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 
1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. 
Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. 
[Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. 
Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. 
Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. 
[Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. 
Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. 
Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. 
[121] ProPublica, “Investigative Journalism and News in the Public Interest,” Oct. 2023. [Online]. Available: https://www.propublica.org/
[128] The Markup, “About Us.” [Online]. Available: https://themarkup.org/about
Jeffries and Yin [2021] A. Jeffries and L. Yin, “Amazon Puts Its Own ‘Brands’ First Above Better-Rated Products,” The Markup, Oct. 2021. [Online]. Available: https://themarkup.org/amazons-advantage/2021/10/14/amazon-puts-its-own-brands-first-above-better-rated-products
Angwin and Parris Jr. [2016] J. Angwin and T. Parris Jr., “Facebook Lets Advertisers Exclude Users by Race,” ProPublica, Oct. 2016. [Online]. Available: https://www.propublica.org/article/facebook-lets-advertisers-exclude-users-by-race
Keegan [2021] J. Keegan, “Facebook Got Rid of Racial Ad Categories. Or Did It?” The Markup, Jul. 2021. [Online]. Available: https://themarkup.org/citizen-browser/2021/07/09/facebook-got-rid-of-racial-ad-categories-or-did-it
Faife and Ng [2021] C. Faife and A. Ng, “Credit Card Ads Were Targeted by Age, Violating Facebook’s Anti-Discrimination Policy,” The Markup, Apr. 2021. [Online]. Available: https://themarkup.org/citizen-browser/2021/04/29/credit-card-ads-were-targeted-by-age-violating-facebooks-anti-discrimination-policy
The Markup [2022] The Markup, “Citizen Browser,” Oct. 2022. [Online]. Available: https://themarkup.org/series/citizen-browser
Ng and Faife [2021] A. Ng and C. Faife, “Facebook Pledges to Remove Discriminatory Credit and Loan Ads Discovered by The Markup,” The Markup, May 2021. [Online]. Available: https://themarkup.org/citizen-browser/2021/05/04/facebook-pledges-to-remove-discriminatory-credit-and-loan-ads-discovered-by-the-markup
Yin [2021] L. Yin, “Citing Markup Investigation, Civil Rights Group Demands Racial Equity Audit at Google,” The Markup, May 2021. [Online]. Available: https://themarkup.org/google-the-giant/2021/05/04/citing-markup-investigation-civil-rights-group-demands-racial-equity-audit-at-google
Carollo [2021] M. Carollo, “The Markup’s Work Cited in Effort to Outlaw Discriminatory Algorithms,” The Markup, Dec. 2021. [Online]. Available: https://themarkup.org/locked-out/2021/12/17/the-markups-work-cited-in-effort-to-outlaw-discriminatory-algorithms
Lecher and Keegan [2021] C. Lecher and J. Keegan, “Nevada Lawmakers Introduce Privacy Legislation After Markup Investigation into Vaccine Websites,” The Markup, May 2021. [Online]. Available: https://themarkup.org/blacklight/2021/05/18/nevada-lawmakers-introduce-privacy-legislation-after-markup-investigation-into-vaccine-websites
Benner et al. [2019] K. Benner, G. Thrush, and M. Isaac, “Facebook Engages in Housing Discrimination With Its Ad Practices, U.S. Says,” The New York Times, Mar. 2019. [Online]. Available: https://www.nytimes.com/2019/03/28/us/politics/facebook-housing-discrimination.html
Feiner [2022] L. Feiner, “DOJ settles lawsuit with Facebook over allegedly discriminatory housing advertising,” CNBC, Jun. 2022. [Online]. Available: https://www.cnbc.com/2022/06/21/doj-settles-with-facebook-over-allegedly-discriminatory-housing-ads.html
Carollo [2023] M. Carollo, “An Algorithm Decides Who Gets a Liver Transplant. Here Are 5 Things to Know,” The Markup, May 2023. [Online]. Available: https://themarkup.org/hello-world/2023/05/20/an-algorithm-decides-who-gets-a-liver-transplant-here-are-5-things-to-know
Carollo and Tanen [2023] M. Carollo and B. Tanen, “Poorer States Suffer Under New Organ Donation Rules, As Livers Go to Waste,” The Markup, Mar. 2023. [Online]. Available: https://themarkup.org/organ-failure/2023/03/21/poorer-states-suffer-under-new-organ-donation-rules-as-livers-go-to-waste
Electronic Frontier Foundation [2007] Electronic Frontier Foundation, “About EFF,” Jul. 2007. [Online]. Available: https://www.eff.org/about
Refugee Law Lab [2020] Refugee Law Lab, “Refugee Law Lab,” Jul. 2020. [Online]. Available: https://refugeelab.ca/
[144] The Citizen Lab, “About the Citizen Lab.” [Online]. Available: https://citizenlab.ca/about/
Migration Tech Monitor [2023] Migration Tech Monitor, “Migration and Technology Monitor,” Feb. 2023. [Online]. Available: https://www.migrationtechmonitor.com
Ada Lovelace Institute [2023] Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/
[147] American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/
[148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us
Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims
ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong
Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/
National Institute of Standards and Technology [2023] National Institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/
Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/
Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/
[155] National Institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home
AI [2023] National Institute of Standards and Technology, “Artificial Intelligence Risk Management Framework (AI RMF 1.0),” 2023.
Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/
Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf
Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443). IEEE, 2003, p. 44.
Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf
Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/
Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209
Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151.
Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im)possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021.
Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19. New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598
Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669
Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761
Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022.
Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23. New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237
Carollo, “An Algorithm Decides Who Gets a Liver Transplant. Here Are 5 Things to Know. – The Markup,” May 2023. [Online]. Available: https://themarkup.org/hello-world/2023/05/20/an-algorithm-decides-who-gets-a-liver-transplant-here-are-5-things-to-know Carollo and Tanen [2023] M. Carollo and B. Tanen, “Poorer States Suffer Under New Organ Donation Rules, As Livers Go to Waste – The Markup,” The Markup, Mar. 2023. [Online]. Available: https://themarkup.org/organ-failure/2023/03/21/poorer-states-suffer-under-new-organ-donation-rules-as-livers-go-to-waste Electronic Frontier Foundation [2007] Electronic Frontier Foundation, “About EFF,” Jul. 2007. [Online]. Available: https://www.eff.org/about Refugee Law Lab [2020] Refugee Law Lab, “Refugee Law Lab,” Jul. 2020. [Online]. Available: https://refugeelab.ca/ [144] The Citizen Lab, “About the Citizen Lab.” [Online]. Available: https://citizenlab.ca/about/ Migration Tech Monitor [2023] Migration Tech Monitor, “Migration and Technology Monitor,” Feb. 2023. [Online]. Available: https://www.migrationtechmonitor.com Ada Lovelace Institute [2023] Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/ [147] American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. 
Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. 
Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. 
[Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ K. Benner, G. Thrush, and M. Isaac, “Facebook Engages in Housing Discrimination With Its Ad Practices, U.S. Says,” The New York Times, Mar. 2019. [Online]. Available: https://www.nytimes.com/2019/03/28/us/politics/facebook-housing-discrimination.html Feiner [2022] L. Feiner, “DOJ settles lawsuit with Facebook over allegedly discriminatory housing advertising,” CNBC, Jun. 2022. [Online]. Available: https://www.cnbc.com/2022/06/21/doj-settles-with-facebook-over-allegedly-discriminatory-housing-ads.html Carollo [2023] M. Carollo, “An Algorithm Decides Who Gets a Liver Transplant. Here Are 5 Things to Know. – The Markup,” May 2023. [Online]. Available: https://themarkup.org/hello-world/2023/05/20/an-algorithm-decides-who-gets-a-liver-transplant-here-are-5-things-to-know Carollo and Tanen [2023] M. Carollo and B. Tanen, “Poorer States Suffer Under New Organ Donation Rules, As Livers Go to Waste – The Markup,” The Markup, Mar. 2023. [Online]. Available: https://themarkup.org/organ-failure/2023/03/21/poorer-states-suffer-under-new-organ-donation-rules-as-livers-go-to-waste Electronic Frontier Foundation [2007] Electronic Frontier Foundation, “About EFF,” Jul. 2007. [Online]. Available: https://www.eff.org/about Refugee Law Lab [2020] Refugee Law Lab, “Refugee Law Lab,” Jul. 2020. [Online]. Available: https://refugeelab.ca/ [144] The Citizen Lab, “About the Citizen Lab.” [Online]. Available: https://citizenlab.ca/about/ Migration Tech Monitor [2023] Migration Tech Monitor, “Migration and Technology Monitor,” Feb. 2023. [Online]. Available: https://www.migrationtechmonitor.com Ada Lovelace Institute [2023] Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/ [147] American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. 
Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. 
Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. 
Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ L. Feiner, “DOJ settles lawsuit with Facebook over allegedly discriminatory housing advertising,” CNBC, Jun. 2022. [Online]. Available: https://www.cnbc.com/2022/06/21/doj-settles-with-facebook-over-allegedly-discriminatory-housing-ads.html Carollo [2023] M. Carollo, “An Algorithm Decides Who Gets a Liver Transplant. Here Are 5 Things to Know. – The Markup,” May 2023. [Online]. Available: https://themarkup.org/hello-world/2023/05/20/an-algorithm-decides-who-gets-a-liver-transplant-here-are-5-things-to-know Carollo and Tanen [2023] M. Carollo and B. Tanen, “Poorer States Suffer Under New Organ Donation Rules, As Livers Go to Waste – The Markup,” The Markup, Mar. 2023. [Online]. Available: https://themarkup.org/organ-failure/2023/03/21/poorer-states-suffer-under-new-organ-donation-rules-as-livers-go-to-waste Electronic Frontier Foundation [2007] Electronic Frontier Foundation, “About EFF,” Jul. 2007. [Online]. Available: https://www.eff.org/about Refugee Law Lab [2020] Refugee Law Lab, “Refugee Law Lab,” Jul. 2020. [Online]. Available: https://refugeelab.ca/ [144] The Citizen Lab, “About the Citizen Lab.” [Online]. 
Available: https://citizenlab.ca/about/ Migration Tech Monitor [2023] Migration Tech Monitor, “Migration and Technology Monitor,” Feb. 2023. [Online]. Available: https://www.migrationtechmonitor.com Ada Lovelace Institute [2023] Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/ [147] American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. 
Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. 
Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. Carollo, “An Algorithm Decides Who Gets a Liver Transplant. Here Are 5 Things to Know. 
– The Markup,” May 2023. [Online]. Available: https://themarkup.org/hello-world/2023/05/20/an-algorithm-decides-who-gets-a-liver-transplant-here-are-5-things-to-know Carollo and Tanen [2023] M. Carollo and B. Tanen, “Poorer States Suffer Under New Organ Donation Rules, As Livers Go to Waste – The Markup,” The Markup, Mar. 2023. [Online]. Available: https://themarkup.org/organ-failure/2023/03/21/poorer-states-suffer-under-new-organ-donation-rules-as-livers-go-to-waste Electronic Frontier Foundation [2007] Electronic Frontier Foundation, “About EFF,” Jul. 2007. [Online]. Available: https://www.eff.org/about Refugee Law Lab [2020] Refugee Law Lab, “Refugee Law Lab,” Jul. 2020. [Online]. Available: https://refugeelab.ca/ [144] The Citizen Lab, “About the Citizen Lab.” [Online]. Available: https://citizenlab.ca/about/ Migration Tech Monitor [2023] Migration Tech Monitor, “Migration and Technology Monitor,” Feb. 2023. [Online]. Available: https://www.migrationtechmonitor.com Ada Lovelace Institute [2023] Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/ [147] American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. 
Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 
2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. Carollo and B. Tanen, “Poorer States Suffer Under New Organ Donation Rules, As Livers Go to Waste – The Markup,” The Markup, Mar. 2023. [Online]. Available: https://themarkup.org/organ-failure/2023/03/21/poorer-states-suffer-under-new-organ-donation-rules-as-livers-go-to-waste Electronic Frontier Foundation [2007] Electronic Frontier Foundation, “About EFF,” Jul. 2007. [Online]. Available: https://www.eff.org/about Refugee Law Lab [2020] Refugee Law Lab, “Refugee Law Lab,” Jul. 2020. [Online]. Available: https://refugeelab.ca/ [144] The Citizen Lab, “About the Citizen Lab.” [Online]. Available: https://citizenlab.ca/about/ Migration Tech Monitor [2023] Migration Tech Monitor, “Migration and Technology Monitor,” Feb. 2023. [Online]. Available: https://www.migrationtechmonitor.com Ada Lovelace Institute [2023] Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/ [147] American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. 
Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework: Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/
Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, NISTIR 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf
Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference Proceedings, IEEE, 2003, p. 44.
Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf
Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/
Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209
Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151.
Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im)possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021.
Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency (FAT* ’19), New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598
Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669
Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761
Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022.
Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization (EAAMO ’23), New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237
Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf
Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI Tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf
Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/
Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558
Accountable Tech [2023] Accountable Tech, AI Now Institute, and EPIC, “Zero Trust AI Governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance
Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/
Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: The circulation of ‘ethics’ in big tech,” Science as Culture, vol. 31, no. 1, pp. 121–135, 2022.
Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463.
Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479.
Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: Value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022.
Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, Feb. 11, 2021.
Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986
Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184.
National Institute of Standards and Technology [2020] National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt
National Institute of Standards and Technology [2000] National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000
National Institute of Standards and Technology [2002] National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002
National Institute of Standards and Technology [2006] National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006
National Institute of Standards and Technology [2013] National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013
AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/
Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/
Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/
Electronic Frontier Foundation, “About EFF,” Jul. 2007. [Online]. Available: https://www.eff.org/about
Refugee Law Lab [2020] Refugee Law Lab, “Refugee Law Lab,” Jul. 2020. [Online]. Available: https://refugeelab.ca/
The Citizen Lab, “About the Citizen Lab.” [Online]. Available: https://citizenlab.ca/about/
Migration Tech Monitor [2023] Migration Tech Monitor, “Migration and Technology Monitor,” Feb. 2023. [Online]. Available: https://www.migrationtechmonitor.com
Ada Lovelace Institute [2023] Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/
American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/
Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us
Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims
ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong
Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/
National Institute of Standards and Technology [2023] National Institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/
Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/
Information Commissioner’s Office [2022] Information Commissioner’s Office, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/
National Institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home
AI [2023] National Institute of Standards and Technology, “Artificial Intelligence Risk Management Framework (AI RMF 1.0),” 2023.
Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. 
Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. 
[Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. 
Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. 
Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. 
[Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. 
Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. 
[2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. 
Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. 
Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. 
[2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. 
Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 
2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. 
Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. 
Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. 
[Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. 
[Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. 
Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. 
Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. 
Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. 
Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. 
National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. 
Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. 
Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. 
Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. 
Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ B. 
Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. 
Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. 
Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. 
Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ B. 
Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. 
Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. 
[Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 
2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. 
Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. 
[Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/
Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. Ng and C. Faife, “Facebook Pledges to Remove Discriminatory Credit and Loan Ads Discovered by The Markup – The Markup,” The Markup, May 2021. [Online]. Available: https://themarkup.org/citizen-browser/2021/05/04/facebook-pledges-to-remove-discriminatory-credit-and-loan-ads-discovered-by-the-markup Yin [2021] L. Yin, “Citing Markup Investigation, Civil Rights Group Demands Racial Equity Audit at Google – The Markup,” The Markup, May 2021. [Online]. Available: https://themarkup.org/google-the-giant/2021/05/04/citing-markup-investigation-civil-rights-group-demands-racial-equity-audit-at-google Carollo [2021] M. 
Carollo, “The Markup’s Work Cited in Effort to Outlaw Discriminatory Algorithms – The Markup,” The Markup, Dec. 2021. [Online]. Available: https://themarkup.org/locked-out/2021/12/17/the-markups-work-cited-in-effort-to-outlaw-discriminatory-algorithms Lecher and Keegan [2021] C. Lecher and J. Keegan, “Nevada Lawmakers Introduce Privacy Legislation After Markup Investigation into Vaccine Websites – The Markup,” The Markup, May 2021. [Online]. Available: https://themarkup.org/blacklight/2021/05/18/nevada-lawmakers-introduce-privacy-legislation-after-markup-investigation-into-vaccine-websites Benner et al. [2019] K. Benner, G. Thrush, and M. Isaac, “Facebook Engages in Housing Discrimination With Its Ad Practices, U.S. Says,” The New York Times, Mar. 2019. [Online]. Available: https://www.nytimes.com/2019/03/28/us/politics/facebook-housing-discrimination.html Feiner [2022] L. Feiner, “DOJ settles lawsuit with Facebook over allegedly discriminatory housing advertising,” CNBC, Jun. 2022. [Online]. Available: https://www.cnbc.com/2022/06/21/doj-settles-with-facebook-over-allegedly-discriminatory-housing-ads.html Carollo [2023] M. Carollo, “An Algorithm Decides Who Gets a Liver Transplant. Here Are 5 Things to Know. – The Markup,” May 2023. [Online]. Available: https://themarkup.org/hello-world/2023/05/20/an-algorithm-decides-who-gets-a-liver-transplant-here-are-5-things-to-know Carollo and Tanen [2023] M. Carollo and B. Tanen, “Poorer States Suffer Under New Organ Donation Rules, As Livers Go to Waste – The Markup,” The Markup, Mar. 2023. [Online]. Available: https://themarkup.org/organ-failure/2023/03/21/poorer-states-suffer-under-new-organ-donation-rules-as-livers-go-to-waste Electronic Frontier Foundation [2007] Electronic Frontier Foundation, “About EFF,” Jul. 2007. [Online]. Available: https://www.eff.org/about Refugee Law Lab [2020] Refugee Law Lab, “Refugee Law Lab,” Jul. 2020. [Online]. Available: https://refugeelab.ca/ [144] The Citizen Lab, “About the Citizen Lab.” [Online]. Available: https://citizenlab.ca/about/ Migration Tech Monitor [2023] Migration Tech Monitor, “Migration and Technology Monitor,” Feb. 2023. [Online]. Available: https://www.migrationtechmonitor.com Ada Lovelace Institute [2023] Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/ [147] American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. 
Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. 
Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. 
[2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ L. Yin, “Citing Markup Investigation, Civil Rights Group Demands Racial Equity Audit at Google – The Markup,” The Markup, May 2021. [Online]. Available: https://themarkup.org/google-the-giant/2021/05/04/citing-markup-investigation-civil-rights-group-demands-racial-equity-audit-at-google Carollo [2021] M. Carollo, “The Markup’s Work Cited in Effort to Outlaw Discriminatory Algorithms – The Markup,” The Markup, Dec. 2021. [Online]. Available: https://themarkup.org/locked-out/2021/12/17/the-markups-work-cited-in-effort-to-outlaw-discriminatory-algorithms Lecher and Keegan [2021] C. Lecher and J. Keegan, “Nevada Lawmakers Introduce Privacy Legislation After Markup Investigation into Vaccine Websites – The Markup,” The Markup, May 2021. [Online]. Available: https://themarkup.org/blacklight/2021/05/18/nevada-lawmakers-introduce-privacy-legislation-after-markup-investigation-into-vaccine-websites Benner et al. [2019] K. Benner, G. Thrush, and M. Isaac, “Facebook Engages in Housing Discrimination With Its Ad Practices, U.S. Says,” The New York Times, Mar. 2019. [Online]. Available: https://www.nytimes.com/2019/03/28/us/politics/facebook-housing-discrimination.html Feiner [2022] L. Feiner, “DOJ settles lawsuit with Facebook over allegedly discriminatory housing advertising,” CNBC, Jun. 2022. [Online]. Available: https://www.cnbc.com/2022/06/21/doj-settles-with-facebook-over-allegedly-discriminatory-housing-ads.html Carollo [2023] M. Carollo, “An Algorithm Decides Who Gets a Liver Transplant. Here Are 5 Things to Know. – The Markup,” May 2023. [Online]. 
Available: https://themarkup.org/hello-world/2023/05/20/an-algorithm-decides-who-gets-a-liver-transplant-here-are-5-things-to-know Carollo and Tanen [2023] M. Carollo and B. Tanen, “Poorer States Suffer Under New Organ Donation Rules, As Livers Go to Waste – The Markup,” The Markup, Mar. 2023. [Online]. Available: https://themarkup.org/organ-failure/2023/03/21/poorer-states-suffer-under-new-organ-donation-rules-as-livers-go-to-waste Electronic Frontier Foundation [2007] Electronic Frontier Foundation, “About EFF,” Jul. 2007. [Online]. Available: https://www.eff.org/about Refugee Law Lab [2020] Refugee Law Lab, “Refugee Law Lab,” Jul. 2020. [Online]. Available: https://refugeelab.ca/ [144] The Citizen Lab, “About the Citizen Lab.” [Online]. Available: https://citizenlab.ca/about/ Migration Tech Monitor [2023] Migration Tech Monitor, “Migration and Technology Monitor,” Feb. 2023. [Online]. Available: https://www.migrationtechmonitor.com Ada Lovelace Institute [2023] Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/ [147] American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. 
Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. 
Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. Carollo, “The Markup’s Work Cited in Effort to Outlaw Discriminatory Algorithms – The Markup,” The Markup, Dec. 2021. [Online]. Available: https://themarkup.org/locked-out/2021/12/17/the-markups-work-cited-in-effort-to-outlaw-discriminatory-algorithms Lecher and Keegan [2021] C. Lecher and J. Keegan, “Nevada Lawmakers Introduce Privacy Legislation After Markup Investigation into Vaccine Websites – The Markup,” The Markup, May 2021. [Online]. Available: https://themarkup.org/blacklight/2021/05/18/nevada-lawmakers-introduce-privacy-legislation-after-markup-investigation-into-vaccine-websites Benner et al. [2019] K. Benner, G. Thrush, and M. Isaac, “Facebook Engages in Housing Discrimination With Its Ad Practices, U.S. Says,” The New York Times, Mar. 2019. [Online]. Available: https://www.nytimes.com/2019/03/28/us/politics/facebook-housing-discrimination.html Feiner [2022] L. Feiner, “DOJ settles lawsuit with Facebook over allegedly discriminatory housing advertising,” CNBC, Jun. 2022. [Online]. Available: https://www.cnbc.com/2022/06/21/doj-settles-with-facebook-over-allegedly-discriminatory-housing-ads.html Carollo [2023] M. Carollo, “An Algorithm Decides Who Gets a Liver Transplant. Here Are 5 Things to Know. – The Markup,” May 2023. [Online]. Available: https://themarkup.org/hello-world/2023/05/20/an-algorithm-decides-who-gets-a-liver-transplant-here-are-5-things-to-know Carollo and Tanen [2023] M. Carollo and B. Tanen, “Poorer States Suffer Under New Organ Donation Rules, As Livers Go to Waste – The Markup,” The Markup, Mar. 2023. [Online]. Available: https://themarkup.org/organ-failure/2023/03/21/poorer-states-suffer-under-new-organ-donation-rules-as-livers-go-to-waste Electronic Frontier Foundation [2007] Electronic Frontier Foundation, “About EFF,” Jul. 2007. [Online]. Available: https://www.eff.org/about Refugee Law Lab [2020] Refugee Law Lab, “Refugee Law Lab,” Jul. 2020. [Online]. Available: https://refugeelab.ca/ [144] The Citizen Lab, “About the Citizen Lab.” [Online]. Available: https://citizenlab.ca/about/ Migration Tech Monitor [2023] Migration Tech Monitor, “Migration and Technology Monitor,” Feb. 2023. [Online]. Available: https://www.migrationtechmonitor.com Ada Lovelace Institute [2023] Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/ [147] American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. 
Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. 
Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. 
Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ C. Lecher and J. Keegan, “Nevada Lawmakers Introduce Privacy Legislation After Markup Investigation into Vaccine Websites – The Markup,” The Markup, May 2021. [Online]. Available: https://themarkup.org/blacklight/2021/05/18/nevada-lawmakers-introduce-privacy-legislation-after-markup-investigation-into-vaccine-websites Benner et al. [2019] K. Benner, G. Thrush, and M. Isaac, “Facebook Engages in Housing Discrimination With Its Ad Practices, U.S. Says,” The New York Times, Mar. 2019. [Online]. Available: https://www.nytimes.com/2019/03/28/us/politics/facebook-housing-discrimination.html Feiner [2022] L. 
Feiner, “DOJ settles lawsuit with Facebook over allegedly discriminatory housing advertising,” CNBC, Jun. 2022. [Online]. Available: https://www.cnbc.com/2022/06/21/doj-settles-with-facebook-over-allegedly-discriminatory-housing-ads.html Carollo [2023] M. Carollo, “An Algorithm Decides Who Gets a Liver Transplant. Here Are 5 Things to Know. – The Markup,” May 2023. [Online]. Available: https://themarkup.org/hello-world/2023/05/20/an-algorithm-decides-who-gets-a-liver-transplant-here-are-5-things-to-know Carollo and Tanen [2023] M. Carollo and B. Tanen, “Poorer States Suffer Under New Organ Donation Rules, As Livers Go to Waste – The Markup,” The Markup, Mar. 2023. [Online]. Available: https://themarkup.org/organ-failure/2023/03/21/poorer-states-suffer-under-new-organ-donation-rules-as-livers-go-to-waste Electronic Frontier Foundation [2007] Electronic Frontier Foundation, “About EFF,” Jul. 2007. [Online]. Available: https://www.eff.org/about Refugee Law Lab [2020] Refugee Law Lab, “Refugee Law Lab,” Jul. 2020. [Online]. Available: https://refugeelab.ca/ [144] The Citizen Lab, “About the Citizen Lab.” [Online]. Available: https://citizenlab.ca/about/ Migration Tech Monitor [2023] Migration Tech Monitor, “Migration and Technology Monitor,” Feb. 2023. [Online]. Available: https://www.migrationtechmonitor.com Ada Lovelace Institute [2023] Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/ [147] American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. 
[2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ The Citizen Lab, “About the Citizen Lab.” [Online]. Available: https://citizenlab.ca/about/ Migration Tech Monitor [2023] Migration Tech Monitor, “Migration and Technology Monitor,” Feb. 2023. [Online]. Available: https://www.migrationtechmonitor.com Ada Lovelace Institute [2023] Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/ [147] American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. 
Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. 
Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. 
[2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Migration Tech Monitor, “Migration and Technology Monitor,” Feb. 2023. [Online]. Available: https://www.migrationtechmonitor.com Ada Lovelace Institute [2023] Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/ [147] American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. 
Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. 
Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. 
[2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/ [147] American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. 
Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. 
[Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. 
National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. 
Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. 
Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. 
Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, NISTIR 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf
Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443). IEEE, 2003, p. 44.
Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf
Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/
Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209
Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151.
Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im)possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021.
Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19. New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598
Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669
Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761
Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022.
Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23. New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237
Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf
Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf
Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/
Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558
Accountable Tech [2023] Accountable Tech, AI Now Institute, and EPIC, “Zero Trust AI Governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance
Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/
Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: The circulation of ‘ethics’ in big tech,” Science as Culture, vol. 31, no. 1, pp. 121–135, 2022.
Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463.
Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479.
Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: Value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022.
Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, Feb. 2021.
Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986
Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184.
National Institute of Standards and Technology [2020] National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt
National Institute of Standards and Technology [2000] ——, “Face Recognition Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000
National Institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002
National Institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006
National Institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013
AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/
Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/
Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/
Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims
ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong
Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/
National Institute of Standards and Technology [2023] National Institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/
Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/
Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/
National Institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home
National Institute of Standards and Technology, “Artificial Intelligence Risk Management Framework (AI RMF 1.0),” 2023.
Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework: Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/
Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. 
Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. 
[Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. 
Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. 
Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. 
Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. 
Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. 
Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. 
Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. 
Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. 
Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. 
[Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. 
Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. 
[Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. 
[Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. 
Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. 
Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. 
Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. 
Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. 
National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. 
Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. 
Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. 
Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. 
Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ B. 
Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. 
Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. 
Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. 
Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ B. 
Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. 
Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. Carollo, “The Markup’s Work Cited in Effort to Outlaw Discriminatory Algorithms – The Markup,” The Markup, Dec. 2021. [Online]. Available: https://themarkup.org/locked-out/2021/12/17/the-markups-work-cited-in-effort-to-outlaw-discriminatory-algorithms Lecher and Keegan [2021] C. Lecher and J. Keegan, “Nevada Lawmakers Introduce Privacy Legislation After Markup Investigation into Vaccine Websites – The Markup,” The Markup, May 2021. [Online]. Available: https://themarkup.org/blacklight/2021/05/18/nevada-lawmakers-introduce-privacy-legislation-after-markup-investigation-into-vaccine-websites Benner et al. [2019] K. Benner, G. Thrush, and M. Isaac, “Facebook Engages in Housing Discrimination With Its Ad Practices, U.S. Says,” The New York Times, Mar. 2019. [Online]. Available: https://www.nytimes.com/2019/03/28/us/politics/facebook-housing-discrimination.html Feiner [2022] L. Feiner, “DOJ settles lawsuit with Facebook over allegedly discriminatory housing advertising,” CNBC, Jun. 2022. [Online]. Available: https://www.cnbc.com/2022/06/21/doj-settles-with-facebook-over-allegedly-discriminatory-housing-ads.html Carollo [2023] M. Carollo, “An Algorithm Decides Who Gets a Liver Transplant. Here Are 5 Things to Know. – The Markup,” May 2023. [Online]. Available: https://themarkup.org/hello-world/2023/05/20/an-algorithm-decides-who-gets-a-liver-transplant-here-are-5-things-to-know Carollo and Tanen [2023] M. Carollo and B. Tanen, “Poorer States Suffer Under New Organ Donation Rules, As Livers Go to Waste – The Markup,” The Markup, Mar. 2023. [Online]. Available: https://themarkup.org/organ-failure/2023/03/21/poorer-states-suffer-under-new-organ-donation-rules-as-livers-go-to-waste Electronic Frontier Foundation [2007] Electronic Frontier Foundation, “About EFF,” Jul. 2007. [Online]. Available: https://www.eff.org/about Refugee Law Lab [2020] Refugee Law Lab, “Refugee Law Lab,” Jul. 2020. [Online]. Available: https://refugeelab.ca/ [144] The Citizen Lab, “About the Citizen Lab.” [Online]. Available: https://citizenlab.ca/about/ Migration Tech Monitor [2023] Migration Tech Monitor, “Migration and Technology Monitor,” Feb. 2023. [Online]. Available: https://www.migrationtechmonitor.com Ada Lovelace Institute [2023] Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/ [147] American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. 
Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   
New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. 
Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ C. Lecher and J. Keegan, “Nevada Lawmakers Introduce Privacy Legislation After Markup Investigation into Vaccine Websites – The Markup,” The Markup, May 2021. [Online]. Available: https://themarkup.org/blacklight/2021/05/18/nevada-lawmakers-introduce-privacy-legislation-after-markup-investigation-into-vaccine-websites Benner et al. [2019] K. Benner, G. Thrush, and M. Isaac, “Facebook Engages in Housing Discrimination With Its Ad Practices, U.S. Says,” The New York Times, Mar. 2019. [Online]. Available: https://www.nytimes.com/2019/03/28/us/politics/facebook-housing-discrimination.html Feiner [2022] L. Feiner, “DOJ settles lawsuit with Facebook over allegedly discriminatory housing advertising,” CNBC, Jun. 2022. [Online]. Available: https://www.cnbc.com/2022/06/21/doj-settles-with-facebook-over-allegedly-discriminatory-housing-ads.html Carollo [2023] M. Carollo, “An Algorithm Decides Who Gets a Liver Transplant. Here Are 5 Things to Know. – The Markup,” May 2023. [Online]. Available: https://themarkup.org/hello-world/2023/05/20/an-algorithm-decides-who-gets-a-liver-transplant-here-are-5-things-to-know Carollo and Tanen [2023] M. Carollo and B. Tanen, “Poorer States Suffer Under New Organ Donation Rules, As Livers Go to Waste – The Markup,” The Markup, Mar. 2023. [Online]. 
Available: https://themarkup.org/organ-failure/2023/03/21/poorer-states-suffer-under-new-organ-donation-rules-as-livers-go-to-waste Electronic Frontier Foundation [2007] Electronic Frontier Foundation, “About EFF,” Jul. 2007. [Online]. Available: https://www.eff.org/about Refugee Law Lab [2020] Refugee Law Lab, “Refugee Law Lab,” Jul. 2020. [Online]. Available: https://refugeelab.ca/ [144] The Citizen Lab, “About the Citizen Lab.” [Online]. Available: https://citizenlab.ca/about/ Migration Tech Monitor [2023] Migration Tech Monitor, “Migration and Technology Monitor,” Feb. 2023. [Online]. Available: https://www.migrationtechmonitor.com Ada Lovelace Institute [2023] Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/ [147] American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. 
Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. 
Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. 
[Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ K. Benner, G. Thrush, and M. Isaac, “Facebook Engages in Housing Discrimination With Its Ad Practices, U.S. Says,” The New York Times, Mar. 2019. [Online]. Available: https://www.nytimes.com/2019/03/28/us/politics/facebook-housing-discrimination.html Feiner [2022] L. Feiner, “DOJ settles lawsuit with Facebook over allegedly discriminatory housing advertising,” CNBC, Jun. 2022. [Online]. Available: https://www.cnbc.com/2022/06/21/doj-settles-with-facebook-over-allegedly-discriminatory-housing-ads.html Carollo [2023] M. Carollo, “An Algorithm Decides Who Gets a Liver Transplant. Here Are 5 Things to Know. – The Markup,” May 2023. [Online]. Available: https://themarkup.org/hello-world/2023/05/20/an-algorithm-decides-who-gets-a-liver-transplant-here-are-5-things-to-know Carollo and Tanen [2023] M. Carollo and B. Tanen, “Poorer States Suffer Under New Organ Donation Rules, As Livers Go to Waste – The Markup,” The Markup, Mar. 2023. [Online]. Available: https://themarkup.org/organ-failure/2023/03/21/poorer-states-suffer-under-new-organ-donation-rules-as-livers-go-to-waste Electronic Frontier Foundation [2007] Electronic Frontier Foundation, “About EFF,” Jul. 2007. [Online]. Available: https://www.eff.org/about Refugee Law Lab [2020] Refugee Law Lab, “Refugee Law Lab,” Jul. 2020. [Online]. Available: https://refugeelab.ca/ [144] The Citizen Lab, “About the Citizen Lab.” [Online]. Available: https://citizenlab.ca/about/ Migration Tech Monitor [2023] Migration Tech Monitor, “Migration and Technology Monitor,” Feb. 2023. [Online]. Available: https://www.migrationtechmonitor.com Ada Lovelace Institute [2023] Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/ [147] American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. 
Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. 
[Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. 
National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ L. Feiner, “DOJ settles lawsuit with Facebook over allegedly discriminatory housing advertising,” CNBC, Jun. 2022. [Online]. Available: https://www.cnbc.com/2022/06/21/doj-settles-with-facebook-over-allegedly-discriminatory-housing-ads.html Carollo [2023] M. Carollo, “An Algorithm Decides Who Gets a Liver Transplant. Here Are 5 Things to Know. – The Markup,” May 2023. [Online]. Available: https://themarkup.org/hello-world/2023/05/20/an-algorithm-decides-who-gets-a-liver-transplant-here-are-5-things-to-know Carollo and Tanen [2023] M. Carollo and B. Tanen, “Poorer States Suffer Under New Organ Donation Rules, As Livers Go to Waste – The Markup,” The Markup, Mar. 2023. [Online]. Available: https://themarkup.org/organ-failure/2023/03/21/poorer-states-suffer-under-new-organ-donation-rules-as-livers-go-to-waste Electronic Frontier Foundation [2007] Electronic Frontier Foundation, “About EFF,” Jul. 2007. [Online]. Available: https://www.eff.org/about Refugee Law Lab [2020] Refugee Law Lab, “Refugee Law Lab,” Jul. 2020. [Online]. Available: https://refugeelab.ca/ [144] The Citizen Lab, “About the Citizen Lab.” [Online]. Available: https://citizenlab.ca/about/ Migration Tech Monitor [2023] Migration Tech Monitor, “Migration and Technology Monitor,” Feb. 2023. [Online]. Available: https://www.migrationtechmonitor.com Ada Lovelace Institute [2023] Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/ [147] American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. 
Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. 
Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. 
Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479.
M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022.
H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, Feb. 11, 2021.
L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986
A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184.
National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt
——, “Face Recognition Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000
——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002
——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006
——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013
AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/
L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/
J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/
M. Carollo, “An Algorithm Decides Who Gets a Liver Transplant. Here Are 5 Things to Know,” The Markup, May 2023. [Online]. Available: https://themarkup.org/hello-world/2023/05/20/an-algorithm-decides-who-gets-a-liver-transplant-here-are-5-things-to-know
M. Carollo and B. Tanen, “Poorer States Suffer Under New Organ Donation Rules, As Livers Go to Waste,” The Markup, Mar. 2023. [Online]. Available: https://themarkup.org/organ-failure/2023/03/21/poorer-states-suffer-under-new-organ-donation-rules-as-livers-go-to-waste
Electronic Frontier Foundation, “About EFF,” Jul. 2007. [Online]. Available: https://www.eff.org/about
Refugee Law Lab, “Refugee Law Lab,” Jul. 2020. [Online]. Available: https://refugeelab.ca/
The Citizen Lab, “About the Citizen Lab.” [Online]. Available: https://citizenlab.ca/about/
Migration Tech Monitor, “Migration and Technology Monitor,” Feb. 2023. [Online]. Available: https://www.migrationtechmonitor.com
Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/
American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/
Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us
K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims
ACLU of Idaho, “K.W. v. Armstrong,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong
Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/
National Institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/
Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/
——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/
National Institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home
National Institute of Standards and Technology, “Artificial Intelligence Risk Management Framework (AI RMF 1.0),” 2023.
Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/
P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, NISTIR 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf
P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443). IEEE, 2003, p. 44.
M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf
Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/
M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209
I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151.
S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im)possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021.
A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19. New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598
A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669
A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761
B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022.
B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23. New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237
Y. D. Clarke, “Algorithmic Accountability Act of 2022,” 2022. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf
A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf
B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/
P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558
Accountable Tech, AI Now Institute, and EPIC, “Zero Trust AI Governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance
M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/
T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as Culture, vol. 31, no. 1, pp. 121–135, 2022.
W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463.
Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. 
[Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. 
Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. 
Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. 
Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. 
Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. 
Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. 
Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. 
Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. 
[2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. 
Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   
New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. 
Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. 
Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. 
Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. 
Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. 
Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. 
Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. 
Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. 
Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. 
Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. 
Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. 
Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. 
Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. 
Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. 
[Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. 
Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. 
Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479.
A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761
B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022.
B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23. New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237
Y. D. Clarke, “Algorithmic Accountability Act of 2022,” 2022. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf
A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf
B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/
P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558
Accountable Tech and AI Now Institute, “Zero Trust AI Governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance
M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/
T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as Culture, vol. 31, no. 1, pp. 121–135, 2022.
W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463.
M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022.
H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, Feb. 11, 2021.
L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986
A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184.
National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt
National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000
National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002
National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006
National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013
AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/
L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/
J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/
J. Angwin and T. Parris Jr., “Facebook Lets Advertisers Exclude Users by Race,” ProPublica, Oct. 2016. [Online]. Available: https://www.propublica.org/article/facebook-lets-advertisers-exclude-users-by-race
J. Keegan, “Facebook Got Rid of Racial Ad Categories. Or Did It?” The Markup, Jul. 2021. [Online]. Available: https://themarkup.org/citizen-browser/2021/07/09/facebook-got-rid-of-racial-ad-categories-or-did-it
C. Faife and A. Ng, “Credit Card Ads Were Targeted by Age, Violating Facebook’s Anti-Discrimination Policy,” The Markup, Apr. 2021. [Online]. Available: https://themarkup.org/citizen-browser/2021/04/29/credit-card-ads-were-targeted-by-age-violating-facebooks-anti-discrimination-policy
The Markup, “Citizen Browser,” Oct. 2022. [Online]. Available: https://themarkup.org/series/citizen-browser
A. Ng and C. Faife, “Facebook Pledges to Remove Discriminatory Credit and Loan Ads Discovered by The Markup,” The Markup, May 2021. [Online]. Available: https://themarkup.org/citizen-browser/2021/05/04/facebook-pledges-to-remove-discriminatory-credit-and-loan-ads-discovered-by-the-markup
L. Yin, “Citing Markup Investigation, Civil Rights Group Demands Racial Equity Audit at Google,” The Markup, May 2021. [Online]. Available: https://themarkup.org/google-the-giant/2021/05/04/citing-markup-investigation-civil-rights-group-demands-racial-equity-audit-at-google
M. Carollo, “The Markup’s Work Cited in Effort to Outlaw Discriminatory Algorithms,” The Markup, Dec. 2021. [Online]. Available: https://themarkup.org/locked-out/2021/12/17/the-markups-work-cited-in-effort-to-outlaw-discriminatory-algorithms
C. Lecher and J. Keegan, “Nevada Lawmakers Introduce Privacy Legislation After Markup Investigation into Vaccine Websites,” The Markup, May 2021. [Online]. Available: https://themarkup.org/blacklight/2021/05/18/nevada-lawmakers-introduce-privacy-legislation-after-markup-investigation-into-vaccine-websites
K. Benner, G. Thrush, and M. Isaac, “Facebook Engages in Housing Discrimination With Its Ad Practices, U.S. Says,” The New York Times, Mar. 2019. [Online]. Available: https://www.nytimes.com/2019/03/28/us/politics/facebook-housing-discrimination.html
L. Feiner, “DOJ settles lawsuit with Facebook over allegedly discriminatory housing advertising,” CNBC, Jun. 2022. [Online]. Available: https://www.cnbc.com/2022/06/21/doj-settles-with-facebook-over-allegedly-discriminatory-housing-ads.html
M. Carollo, “An Algorithm Decides Who Gets a Liver Transplant. Here Are 5 Things to Know.” The Markup, May 2023. [Online]. Available: https://themarkup.org/hello-world/2023/05/20/an-algorithm-decides-who-gets-a-liver-transplant-here-are-5-things-to-know
M. Carollo and B. Tanen, “Poorer States Suffer Under New Organ Donation Rules, As Livers Go to Waste,” The Markup, Mar. 2023. [Online]. Available: https://themarkup.org/organ-failure/2023/03/21/poorer-states-suffer-under-new-organ-donation-rules-as-livers-go-to-waste
Electronic Frontier Foundation, “About EFF,” Jul. 2007. [Online]. Available: https://www.eff.org/about
Refugee Law Lab, “Refugee Law Lab,” Jul. 2020. [Online]. Available: https://refugeelab.ca/
The Citizen Lab, “About the Citizen Lab.” [Online]. Available: https://citizenlab.ca/about/
Migration Tech Monitor, “Migration and Technology Monitor,” Feb. 2023. [Online]. Available: https://www.migrationtechmonitor.com
Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/
American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/
Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us
K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims
ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong
Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/
National Institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/
Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/
——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/
National Institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home
National Institute of Standards and Technology, “Artificial Intelligence Risk Management Framework (AI RMF 1.0),” 2023.
Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/
P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf
P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443). IEEE, 2003, p. 44.
M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf
Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/
M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209
I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151.
S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im)possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021.
A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19. New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598
A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669
A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761
B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022.
B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23. New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237
A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf
B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/
P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558
Accountable Tech, AI Now Institute, and EPIC, “Zero Trust AI Governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance
M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/
T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as Culture, vol. 31, no. 1, pp. 121–135, 2022.
W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463.
D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479.
M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022.
H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, Feb. 11, 2021.
L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986
Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online].
Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. 
[Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ L. Yin, “Citing Markup Investigation, Civil Rights Group Demands Racial Equity Audit at Google – The Markup,” The Markup, May 2021. [Online]. Available: https://themarkup.org/google-the-giant/2021/05/04/citing-markup-investigation-civil-rights-group-demands-racial-equity-audit-at-google Carollo [2021] M. Carollo, “The Markup’s Work Cited in Effort to Outlaw Discriminatory Algorithms – The Markup,” The Markup, Dec. 2021. [Online]. Available: https://themarkup.org/locked-out/2021/12/17/the-markups-work-cited-in-effort-to-outlaw-discriminatory-algorithms Lecher and Keegan [2021] C. Lecher and J. Keegan, “Nevada Lawmakers Introduce Privacy Legislation After Markup Investigation into Vaccine Websites – The Markup,” The Markup, May 2021. [Online]. Available: https://themarkup.org/blacklight/2021/05/18/nevada-lawmakers-introduce-privacy-legislation-after-markup-investigation-into-vaccine-websites Benner et al. [2019] K. Benner, G. Thrush, and M. Isaac, “Facebook Engages in Housing Discrimination With Its Ad Practices, U.S. Says,” The New York Times, Mar. 2019. [Online]. Available: https://www.nytimes.com/2019/03/28/us/politics/facebook-housing-discrimination.html Feiner [2022] L. Feiner, “DOJ settles lawsuit with Facebook over allegedly discriminatory housing advertising,” CNBC, Jun. 2022. [Online]. Available: https://www.cnbc.com/2022/06/21/doj-settles-with-facebook-over-allegedly-discriminatory-housing-ads.html Carollo [2023] M. Carollo, “An Algorithm Decides Who Gets a Liver Transplant. Here Are 5 Things to Know. – The Markup,” May 2023. [Online]. Available: https://themarkup.org/hello-world/2023/05/20/an-algorithm-decides-who-gets-a-liver-transplant-here-are-5-things-to-know Carollo and Tanen [2023] M. Carollo and B. Tanen, “Poorer States Suffer Under New Organ Donation Rules, As Livers Go to Waste – The Markup,” The Markup, Mar. 2023. [Online]. Available: https://themarkup.org/organ-failure/2023/03/21/poorer-states-suffer-under-new-organ-donation-rules-as-livers-go-to-waste Electronic Frontier Foundation [2007] Electronic Frontier Foundation, “About EFF,” Jul. 2007. [Online]. Available: https://www.eff.org/about Refugee Law Lab [2020] Refugee Law Lab, “Refugee Law Lab,” Jul. 2020. [Online]. Available: https://refugeelab.ca/ [144] The Citizen Lab, “About the Citizen Lab.” [Online]. Available: https://citizenlab.ca/about/ Migration Tech Monitor [2023] Migration Tech Monitor, “Migration and Technology Monitor,” Feb. 2023. [Online]. 
Available: https://www.migrationtechmonitor.com Ada Lovelace Institute [2023] Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/ [147] American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. 
Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 
121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. Carollo, “The Markup’s Work Cited in Effort to Outlaw Discriminatory Algorithms – The Markup,” The Markup, Dec. 2021. [Online]. Available: https://themarkup.org/locked-out/2021/12/17/the-markups-work-cited-in-effort-to-outlaw-discriminatory-algorithms Lecher and Keegan [2021] C. Lecher and J. 
Keegan, “Nevada Lawmakers Introduce Privacy Legislation After Markup Investigation into Vaccine Websites – The Markup,” The Markup, May 2021. [Online]. Available: https://themarkup.org/blacklight/2021/05/18/nevada-lawmakers-introduce-privacy-legislation-after-markup-investigation-into-vaccine-websites Benner et al. [2019] K. Benner, G. Thrush, and M. Isaac, “Facebook Engages in Housing Discrimination With Its Ad Practices, U.S. Says,” The New York Times, Mar. 2019. [Online]. Available: https://www.nytimes.com/2019/03/28/us/politics/facebook-housing-discrimination.html Feiner [2022] L. Feiner, “DOJ settles lawsuit with Facebook over allegedly discriminatory housing advertising,” CNBC, Jun. 2022. [Online]. Available: https://www.cnbc.com/2022/06/21/doj-settles-with-facebook-over-allegedly-discriminatory-housing-ads.html Carollo [2023] M. Carollo, “An Algorithm Decides Who Gets a Liver Transplant. Here Are 5 Things to Know. – The Markup,” May 2023. [Online]. Available: https://themarkup.org/hello-world/2023/05/20/an-algorithm-decides-who-gets-a-liver-transplant-here-are-5-things-to-know Carollo and Tanen [2023] M. Carollo and B. Tanen, “Poorer States Suffer Under New Organ Donation Rules, As Livers Go to Waste – The Markup,” The Markup, Mar. 2023. [Online]. Available: https://themarkup.org/organ-failure/2023/03/21/poorer-states-suffer-under-new-organ-donation-rules-as-livers-go-to-waste Electronic Frontier Foundation [2007] Electronic Frontier Foundation, “About EFF,” Jul. 2007. [Online]. Available: https://www.eff.org/about Refugee Law Lab [2020] Refugee Law Lab, “Refugee Law Lab,” Jul. 2020. [Online]. Available: https://refugeelab.ca/ [144] The Citizen Lab, “About the Citizen Lab.” [Online]. Available: https://citizenlab.ca/about/ Migration Tech Monitor [2023] Migration Tech Monitor, “Migration and Technology Monitor,” Feb. 2023. [Online]. Available: https://www.migrationtechmonitor.com Ada Lovelace Institute [2023] Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/ [147] American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. 
Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. 
[Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. 
National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ C. Lecher and J. Keegan, “Nevada Lawmakers Introduce Privacy Legislation After Markup Investigation into Vaccine Websites – The Markup,” The Markup, May 2021. [Online]. Available: https://themarkup.org/blacklight/2021/05/18/nevada-lawmakers-introduce-privacy-legislation-after-markup-investigation-into-vaccine-websites Benner et al. [2019] K. Benner, G. Thrush, and M. Isaac, “Facebook Engages in Housing Discrimination With Its Ad Practices, U.S. Says,” The New York Times, Mar. 2019. [Online]. Available: https://www.nytimes.com/2019/03/28/us/politics/facebook-housing-discrimination.html Feiner [2022] L. Feiner, “DOJ settles lawsuit with Facebook over allegedly discriminatory housing advertising,” CNBC, Jun. 2022. [Online]. Available: https://www.cnbc.com/2022/06/21/doj-settles-with-facebook-over-allegedly-discriminatory-housing-ads.html Carollo [2023] M. Carollo, “An Algorithm Decides Who Gets a Liver Transplant. Here Are 5 Things to Know. – The Markup,” May 2023. [Online]. Available: https://themarkup.org/hello-world/2023/05/20/an-algorithm-decides-who-gets-a-liver-transplant-here-are-5-things-to-know Carollo and Tanen [2023] M. Carollo and B. Tanen, “Poorer States Suffer Under New Organ Donation Rules, As Livers Go to Waste – The Markup,” The Markup, Mar. 2023. [Online]. Available: https://themarkup.org/organ-failure/2023/03/21/poorer-states-suffer-under-new-organ-donation-rules-as-livers-go-to-waste Electronic Frontier Foundation [2007] Electronic Frontier Foundation, “About EFF,” Jul. 2007. [Online]. Available: https://www.eff.org/about Refugee Law Lab [2020] Refugee Law Lab, “Refugee Law Lab,” Jul. 2020. [Online]. Available: https://refugeelab.ca/ [144] The Citizen Lab, “About the Citizen Lab.” [Online]. 
Available: https://citizenlab.ca/about/ Migration Tech Monitor [2023] Migration Tech Monitor, “Migration and Technology Monitor,” Feb. 2023. [Online]. Available: https://www.migrationtechmonitor.com Ada Lovelace Institute [2023] Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/ [147] American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. 
Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. 
Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ K. Benner, G. Thrush, and M. 
Isaac, “Facebook Engages in Housing Discrimination With Its Ad Practices, U.S. Says,” The New York Times, Mar. 2019. [Online]. Available: https://www.nytimes.com/2019/03/28/us/politics/facebook-housing-discrimination.html Feiner [2022] L. Feiner, “DOJ settles lawsuit with Facebook over allegedly discriminatory housing advertising,” CNBC, Jun. 2022. [Online]. Available: https://www.cnbc.com/2022/06/21/doj-settles-with-facebook-over-allegedly-discriminatory-housing-ads.html Carollo [2023] M. Carollo, “An Algorithm Decides Who Gets a Liver Transplant. Here Are 5 Things to Know. – The Markup,” May 2023. [Online]. Available: https://themarkup.org/hello-world/2023/05/20/an-algorithm-decides-who-gets-a-liver-transplant-here-are-5-things-to-know Carollo and Tanen [2023] M. Carollo and B. Tanen, “Poorer States Suffer Under New Organ Donation Rules, As Livers Go to Waste – The Markup,” The Markup, Mar. 2023. [Online]. Available: https://themarkup.org/organ-failure/2023/03/21/poorer-states-suffer-under-new-organ-donation-rules-as-livers-go-to-waste Electronic Frontier Foundation [2007] Electronic Frontier Foundation, “About EFF,” Jul. 2007. [Online]. Available: https://www.eff.org/about Refugee Law Lab [2020] Refugee Law Lab, “Refugee Law Lab,” Jul. 2020. [Online]. Available: https://refugeelab.ca/ [144] The Citizen Lab, “About the Citizen Lab.” [Online]. Available: https://citizenlab.ca/about/ Migration Tech Monitor [2023] Migration Tech Monitor, “Migration and Technology Monitor,” Feb. 2023. [Online]. Available: https://www.migrationtechmonitor.com Ada Lovelace Institute [2023] Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/ [147] American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. 
Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/
Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, NIST Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf
Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443). IEEE, 2003, p. 44.
Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf
Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/
Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209
Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151.
Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im)possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021.
Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19. New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598
Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669
Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761
Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022.
Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23. New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237
Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf
Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI Tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf
Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/
Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558
Accountable Tech [2023] Accountable Tech, AI Now Institute, and EPIC, “Zero Trust AI Governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance
Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/
Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: The circulation of ‘ethics’ in big tech,” Science as Culture, vol. 31, no. 1, pp. 121–135, 2022.
Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463.
Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479.
Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: Value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022.
Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, Feb. 11, 2021.
Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986
Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184.
National Institute of Standards and Technology [2020] National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt
National Institute of Standards and Technology [2000] National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000
National Institute of Standards and Technology [2002] National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002
National Institute of Standards and Technology [2006] National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006
National Institute of Standards and Technology [2013] National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013
AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/
Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/
Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/
Feiner [2022] L. Feiner, “DOJ settles lawsuit with Facebook over allegedly discriminatory housing advertising,” CNBC, Jun. 2022. [Online]. Available: https://www.cnbc.com/2022/06/21/doj-settles-with-facebook-over-allegedly-discriminatory-housing-ads.html
Carollo [2023] M. Carollo, “An Algorithm Decides Who Gets a Liver Transplant. Here Are 5 Things to Know,” The Markup, May 2023. [Online]. Available: https://themarkup.org/hello-world/2023/05/20/an-algorithm-decides-who-gets-a-liver-transplant-here-are-5-things-to-know
Carollo and Tanen [2023] M. Carollo and B. Tanen, “Poorer States Suffer Under New Organ Donation Rules, As Livers Go to Waste,” The Markup, Mar. 2023. [Online]. Available: https://themarkup.org/organ-failure/2023/03/21/poorer-states-suffer-under-new-organ-donation-rules-as-livers-go-to-waste
Electronic Frontier Foundation [2007] Electronic Frontier Foundation, “About EFF,” Jul. 2007. [Online]. Available: https://www.eff.org/about
Refugee Law Lab [2020] Refugee Law Lab, “Refugee Law Lab,” Jul. 2020. [Online]. Available: https://refugeelab.ca/
[144] The Citizen Lab, “About the Citizen Lab.” [Online]. Available: https://citizenlab.ca/about/
Migration Tech Monitor [2023] Migration Tech Monitor, “Migration and Technology Monitor,” Feb. 2023. [Online]. Available: https://www.migrationtechmonitor.com
Ada Lovelace Institute [2023] Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/
[147] American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/
[148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us
Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims
ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong
Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/
National Institute of Standards and Technology [2023] National Institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/
Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/
Information Commissioner’s Office [2022] Information Commissioner’s Office, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/
[155] National Institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home
AI [2023] National Institute of Standards and Technology, “Artificial Intelligence Risk Management Framework (AI RMF 1.0),” 2023.
[Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Migration Tech Monitor, “Migration and Technology Monitor,” Feb. 2023. [Online]. Available: https://www.migrationtechmonitor.com Ada Lovelace Institute [2023] Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/ [147] American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. 
Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. 
Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. 
[Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/ [147] American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. 
Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. 
Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. 
Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. 
Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. 
[Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. 
Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. 
Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ K. 
Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. 
Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. 
Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. 
Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/
Information Commissioner’s Office [2022] Information Commissioner’s Office, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/
National Institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home
AI [2023] National Institute of Standards and Technology, “Artificial Intelligence Risk Management Framework (AI RMF 1.0),” 2023.
Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/
Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, NIST Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf
Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443). IEEE, 2003, p. 44.
Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf
Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/
Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209
Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. 
Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. 
Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. 
Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. 
Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. 
Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. 
Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. 
Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. 
Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. 
Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. 
Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. 
Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. 
Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. 
Available: https://dl.acm.org/doi/10.1145/3287560.3287598
A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669
A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761
B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022.
B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23. New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237
Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf
A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf
B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/
P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558
Accountable Tech, AI Now Institute, and EPIC, “Zero Trust AI Governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance
M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/
T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as Culture, vol. 31, no. 1, pp. 121–135, 2022.
W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463.
D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479.
M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022.
H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, Feb. 2021.
L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986
A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184.
National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt
——, “Face Recognition Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000
——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002
——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006
——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013
AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/
L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/
J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/
Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. 
Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. 
Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. 
Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/
J. Keegan, “Facebook Got Rid of Racial Ad Categories. Or Did It?” The Markup, Jul. 2021. [Online]. Available: https://themarkup.org/citizen-browser/2021/07/09/facebook-got-rid-of-racial-ad-categories-or-did-it
C. Faife and A. Ng, “Credit Card Ads Were Targeted by Age, Violating Facebook’s Anti-Discrimination Policy,” The Markup, Apr. 2021. [Online]. Available: https://themarkup.org/citizen-browser/2021/04/29/credit-card-ads-were-targeted-by-age-violating-facebooks-anti-discrimination-policy
The Markup, “Citizen Browser,” Oct. 2022. [Online]. Available: https://themarkup.org/series/citizen-browser
A. Ng and C. Faife, “Facebook Pledges to Remove Discriminatory Credit and Loan Ads Discovered by The Markup,” The Markup, May 2021. [Online]. Available: https://themarkup.org/citizen-browser/2021/05/04/facebook-pledges-to-remove-discriminatory-credit-and-loan-ads-discovered-by-the-markup
L. Yin, “Citing Markup Investigation, Civil Rights Group Demands Racial Equity Audit at Google,” The Markup, May 2021. [Online]. Available: https://themarkup.org/google-the-giant/2021/05/04/citing-markup-investigation-civil-rights-group-demands-racial-equity-audit-at-google
M. Carollo, “The Markup’s Work Cited in Effort to Outlaw Discriminatory Algorithms,” The Markup, Dec. 2021. [Online]. Available: https://themarkup.org/locked-out/2021/12/17/the-markups-work-cited-in-effort-to-outlaw-discriminatory-algorithms
C. Lecher and J. Keegan, “Nevada Lawmakers Introduce Privacy Legislation After Markup Investigation into Vaccine Websites,” The Markup, May 2021. [Online]. Available: https://themarkup.org/blacklight/2021/05/18/nevada-lawmakers-introduce-privacy-legislation-after-markup-investigation-into-vaccine-websites
K. Benner, G. Thrush, and M. Isaac, “Facebook Engages in Housing Discrimination With Its Ad Practices, U.S. Says,” The New York Times, Mar. 2019. [Online]. Available: https://www.nytimes.com/2019/03/28/us/politics/facebook-housing-discrimination.html
L. Feiner, “DOJ settles lawsuit with Facebook over allegedly discriminatory housing advertising,” CNBC, Jun. 2022. [Online]. Available: https://www.cnbc.com/2022/06/21/doj-settles-with-facebook-over-allegedly-discriminatory-housing-ads.html
M. Carollo, “An Algorithm Decides Who Gets a Liver Transplant. Here Are 5 Things to Know,” The Markup, May 2023. [Online]. Available: https://themarkup.org/hello-world/2023/05/20/an-algorithm-decides-who-gets-a-liver-transplant-here-are-5-things-to-know
M. Carollo and B. Tanen, “Poorer States Suffer Under New Organ Donation Rules, As Livers Go to Waste,” The Markup, Mar. 2023. [Online]. Available: https://themarkup.org/organ-failure/2023/03/21/poorer-states-suffer-under-new-organ-donation-rules-as-livers-go-to-waste
Electronic Frontier Foundation, “About EFF,” Jul. 2007. [Online]. Available: https://www.eff.org/about
Refugee Law Lab, “Refugee Law Lab,” Jul. 2020. [Online]. Available: https://refugeelab.ca/
The Citizen Lab, “About the Citizen Lab.” [Online]. Available: https://citizenlab.ca/about/
Migration Tech Monitor, “Migration and Technology Monitor,” Feb. 2023. [Online]. Available: https://www.migrationtechmonitor.com
Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/
American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/
Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us
K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims
ACLU of Idaho, “K.W. v. Armstrong,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong
Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/
National Institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/
Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/
Information Commissioner’s Office, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/
National Institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home
National Institute of Standards and Technology, “Artificial Intelligence Risk Management Framework (AI RMF 1.0),” 2023.
Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework, Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/
P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, NIST Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf
P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference Proceedings (Cat. No. 03CH37443). IEEE, 2003, p. 44.
M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf
Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/
M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209
I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151.
S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im)possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021.
A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19. New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598
A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669
A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761
B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022.
B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23. New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237
Y. D. Clarke, “Algorithmic Accountability Act of 2022,” H.R. 6580, 117th Congress, 2022. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf
A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI Tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf
B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/
P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558
Accountable Tech, AI Now Institute, and Electronic Privacy Information Center (EPIC), “Zero Trust AI Governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance
M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/
T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as Culture, vol. 31, no. 1, pp. 121–135, 2022.
W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463.
D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479.
M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022.
H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, Feb. 2021.
L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986
A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184.
National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt
National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000
National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002
National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006
National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013
AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/
L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/
J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/
Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. 
[Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Migration Tech Monitor, “Migration and Technology Monitor,” Feb. 2023. [Online]. Available: https://www.migrationtechmonitor.com Ada Lovelace Institute [2023] Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/ [147] American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. 
Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. 
Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. 
[Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/ [147] American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. 
Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. 
Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. 
Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. 
Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. 
[Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. 
Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. 
Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ K. 
Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. 
Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. 
M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022.
H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, Feb. 11, 2021.
L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986
A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184.
National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt
National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000
National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002
National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006
National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013
AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/
L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/
J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/
Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. 
Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. 
Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. 
Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. 
Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. 
Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. 
Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. 
Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. 
Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. 
Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. 
Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. 
Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. 
Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. 
Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. 
Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. 
Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. 
Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. 
Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/
  126. C. Faife and A. Ng, “Credit Card Ads Were Targeted by Age, Violating Facebook’s Anti-Discrimination Policy – The Markup,” The Markup, Apr. 2021. [Online]. Available: https://themarkup.org/citizen-browser/2021/04/29/credit-card-ads-were-targeted-by-age-violating-facebooks-anti-discrimination-policy The Markup [2022] The Markup, “Citizen Browser,” Oct. 2022. [Online]. Available: https://themarkup.org/series/citizen-browser Ng and Faife [2021] A. Ng and C. Faife, “Facebook Pledges to Remove Discriminatory Credit and Loan Ads Discovered by The Markup – The Markup,” The Markup, May 2021. [Online]. Available: https://themarkup.org/citizen-browser/2021/05/04/facebook-pledges-to-remove-discriminatory-credit-and-loan-ads-discovered-by-the-markup Yin [2021] L. Yin, “Citing Markup Investigation, Civil Rights Group Demands Racial Equity Audit at Google – The Markup,” The Markup, May 2021. [Online]. Available: https://themarkup.org/google-the-giant/2021/05/04/citing-markup-investigation-civil-rights-group-demands-racial-equity-audit-at-google Carollo [2021] M. Carollo, “The Markup’s Work Cited in Effort to Outlaw Discriminatory Algorithms – The Markup,” The Markup, Dec. 2021. [Online]. Available: https://themarkup.org/locked-out/2021/12/17/the-markups-work-cited-in-effort-to-outlaw-discriminatory-algorithms Lecher and Keegan [2021] C. Lecher and J. Keegan, “Nevada Lawmakers Introduce Privacy Legislation After Markup Investigation into Vaccine Websites – The Markup,” The Markup, May 2021. [Online]. Available: https://themarkup.org/blacklight/2021/05/18/nevada-lawmakers-introduce-privacy-legislation-after-markup-investigation-into-vaccine-websites Benner et al. [2019] K. Benner, G. Thrush, and M. Isaac, “Facebook Engages in Housing Discrimination With Its Ad Practices, U.S. Says,” The New York Times, Mar. 2019. [Online]. Available: https://www.nytimes.com/2019/03/28/us/politics/facebook-housing-discrimination.html Feiner [2022] L. Feiner, “DOJ settles lawsuit with Facebook over allegedly discriminatory housing advertising,” CNBC, Jun. 2022. [Online]. Available: https://www.cnbc.com/2022/06/21/doj-settles-with-facebook-over-allegedly-discriminatory-housing-ads.html Carollo [2023] M. Carollo, “An Algorithm Decides Who Gets a Liver Transplant. Here Are 5 Things to Know. – The Markup,” May 2023. [Online]. Available: https://themarkup.org/hello-world/2023/05/20/an-algorithm-decides-who-gets-a-liver-transplant-here-are-5-things-to-know Carollo and Tanen [2023] M. Carollo and B. Tanen, “Poorer States Suffer Under New Organ Donation Rules, As Livers Go to Waste – The Markup,” The Markup, Mar. 2023. [Online]. Available: https://themarkup.org/organ-failure/2023/03/21/poorer-states-suffer-under-new-organ-donation-rules-as-livers-go-to-waste Electronic Frontier Foundation [2007] Electronic Frontier Foundation, “About EFF,” Jul. 2007. [Online]. Available: https://www.eff.org/about Refugee Law Lab [2020] Refugee Law Lab, “Refugee Law Lab,” Jul. 2020. [Online]. Available: https://refugeelab.ca/ [144] The Citizen Lab, “About the Citizen Lab.” [Online]. Available: https://citizenlab.ca/about/ Migration Tech Monitor [2023] Migration Tech Monitor, “Migration and Technology Monitor,” Feb. 2023. [Online]. Available: https://www.migrationtechmonitor.com Ada Lovelace Institute [2023] Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/ [147] American Civil Liberties Union, “Home.” [Online]. 
Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 
2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. 
Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ The Markup, “Citizen Browser,” Oct. 2022. [Online]. Available: https://themarkup.org/series/citizen-browser Ng and Faife [2021] A. Ng and C. Faife, “Facebook Pledges to Remove Discriminatory Credit and Loan Ads Discovered by The Markup – The Markup,” The Markup, May 2021. [Online]. Available: https://themarkup.org/citizen-browser/2021/05/04/facebook-pledges-to-remove-discriminatory-credit-and-loan-ads-discovered-by-the-markup Yin [2021] L. Yin, “Citing Markup Investigation, Civil Rights Group Demands Racial Equity Audit at Google – The Markup,” The Markup, May 2021. 
[Online]. Available: https://themarkup.org/google-the-giant/2021/05/04/citing-markup-investigation-civil-rights-group-demands-racial-equity-audit-at-google Carollo [2021] M. Carollo, “The Markup’s Work Cited in Effort to Outlaw Discriminatory Algorithms – The Markup,” The Markup, Dec. 2021. [Online]. Available: https://themarkup.org/locked-out/2021/12/17/the-markups-work-cited-in-effort-to-outlaw-discriminatory-algorithms Lecher and Keegan [2021] C. Lecher and J. Keegan, “Nevada Lawmakers Introduce Privacy Legislation After Markup Investigation into Vaccine Websites – The Markup,” The Markup, May 2021. [Online]. Available: https://themarkup.org/blacklight/2021/05/18/nevada-lawmakers-introduce-privacy-legislation-after-markup-investigation-into-vaccine-websites Benner et al. [2019] K. Benner, G. Thrush, and M. Isaac, “Facebook Engages in Housing Discrimination With Its Ad Practices, U.S. Says,” The New York Times, Mar. 2019. [Online]. Available: https://www.nytimes.com/2019/03/28/us/politics/facebook-housing-discrimination.html Feiner [2022] L. Feiner, “DOJ settles lawsuit with Facebook over allegedly discriminatory housing advertising,” CNBC, Jun. 2022. [Online]. Available: https://www.cnbc.com/2022/06/21/doj-settles-with-facebook-over-allegedly-discriminatory-housing-ads.html Carollo [2023] M. Carollo, “An Algorithm Decides Who Gets a Liver Transplant. Here Are 5 Things to Know. – The Markup,” May 2023. [Online]. Available: https://themarkup.org/hello-world/2023/05/20/an-algorithm-decides-who-gets-a-liver-transplant-here-are-5-things-to-know Carollo and Tanen [2023] M. Carollo and B. Tanen, “Poorer States Suffer Under New Organ Donation Rules, As Livers Go to Waste – The Markup,” The Markup, Mar. 2023. [Online]. Available: https://themarkup.org/organ-failure/2023/03/21/poorer-states-suffer-under-new-organ-donation-rules-as-livers-go-to-waste Electronic Frontier Foundation [2007] Electronic Frontier Foundation, “About EFF,” Jul. 2007. [Online]. Available: https://www.eff.org/about Refugee Law Lab [2020] Refugee Law Lab, “Refugee Law Lab,” Jul. 2020. [Online]. Available: https://refugeelab.ca/ [144] The Citizen Lab, “About the Citizen Lab.” [Online]. Available: https://citizenlab.ca/about/ Migration Tech Monitor [2023] Migration Tech Monitor, “Migration and Technology Monitor,” Feb. 2023. [Online]. Available: https://www.migrationtechmonitor.com Ada Lovelace Institute [2023] Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/ [147] American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. 
Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   
New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. 
Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. Ng and C. Faife, “Facebook Pledges to Remove Discriminatory Credit and Loan Ads Discovered by The Markup – The Markup,” The Markup, May 2021. [Online]. Available: https://themarkup.org/citizen-browser/2021/05/04/facebook-pledges-to-remove-discriminatory-credit-and-loan-ads-discovered-by-the-markup Yin [2021] L. Yin, “Citing Markup Investigation, Civil Rights Group Demands Racial Equity Audit at Google – The Markup,” The Markup, May 2021. [Online]. Available: https://themarkup.org/google-the-giant/2021/05/04/citing-markup-investigation-civil-rights-group-demands-racial-equity-audit-at-google Carollo [2021] M. Carollo, “The Markup’s Work Cited in Effort to Outlaw Discriminatory Algorithms – The Markup,” The Markup, Dec. 2021. [Online]. Available: https://themarkup.org/locked-out/2021/12/17/the-markups-work-cited-in-effort-to-outlaw-discriminatory-algorithms Lecher and Keegan [2021] C. Lecher and J. Keegan, “Nevada Lawmakers Introduce Privacy Legislation After Markup Investigation into Vaccine Websites – The Markup,” The Markup, May 2021. [Online]. Available: https://themarkup.org/blacklight/2021/05/18/nevada-lawmakers-introduce-privacy-legislation-after-markup-investigation-into-vaccine-websites Benner et al. [2019] K. Benner, G. Thrush, and M. Isaac, “Facebook Engages in Housing Discrimination With Its Ad Practices, U.S. Says,” The New York Times, Mar. 2019. [Online]. 
Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. 
[Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/ [147] American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. 
Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. 
Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. 
Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. 
Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. 
[Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. 
Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. 
Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ K. 
Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. 
Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. 
Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. 
Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   
New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. 
Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. 
Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. 
[Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. 
Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. 
Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. 
Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. 
Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. 
Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. 
Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. 
Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. 
Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. 
[Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. 
Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. 
Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. 
[Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. 
National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Refugee Law Lab, “Refugee Law Lab,” Jul. 2020. [Online]. Available: https://refugeelab.ca/ [144] The Citizen Lab, “About the Citizen Lab.” [Online]. Available: https://citizenlab.ca/about/ Migration Tech Monitor [2023] Migration Tech Monitor, “Migration and Technology Monitor,” Feb. 2023. [Online]. Available: https://www.migrationtechmonitor.com Ada Lovelace Institute [2023] Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/ [147] American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. 
Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. 
Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. 
[2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ The Citizen Lab, “About the Citizen Lab.” [Online]. Available: https://citizenlab.ca/about/ Migration Tech Monitor [2023] Migration Tech Monitor, “Migration and Technology Monitor,” Feb. 2023. [Online]. Available: https://www.migrationtechmonitor.com Ada Lovelace Institute [2023] Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/ [147] American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. 
Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. 
Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. 
[2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Migration Tech Monitor, “Migration and Technology Monitor,” Feb. 2023. [Online]. Available: https://www.migrationtechmonitor.com Ada Lovelace Institute [2023] Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/ [147] American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. 
Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. 
Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. 
[2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/ [147] American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. 
Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. 
[Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. 
National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. 
Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. 
Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. 
Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. 
Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. 
Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. 
Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. 
Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. 
[Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. 
Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. 
Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. 
Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. 
Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. 
Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. 
Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. 
Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. 
Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. 
Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. 
Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. 
[Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 
2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. 
Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. 
[Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/
128. A. Ng and C. Faife, “Facebook Pledges to Remove Discriminatory Credit and Loan Ads Discovered by The Markup,” The Markup, May 2021. [Online]. Available: https://themarkup.org/citizen-browser/2021/05/04/facebook-pledges-to-remove-discriminatory-credit-and-loan-ads-discovered-by-the-markup
Yin [2021] L. Yin, “Citing Markup Investigation, Civil Rights Group Demands Racial Equity Audit at Google,” The Markup, May 2021. [Online]. Available: https://themarkup.org/google-the-giant/2021/05/04/citing-markup-investigation-civil-rights-group-demands-racial-equity-audit-at-google
Carollo [2021] M. Carollo, “The Markup’s Work Cited in Effort to Outlaw Discriminatory Algorithms,” The Markup, Dec. 2021. [Online]. Available: https://themarkup.org/locked-out/2021/12/17/the-markups-work-cited-in-effort-to-outlaw-discriminatory-algorithms
Lecher and Keegan [2021] C. Lecher and J. Keegan, “Nevada Lawmakers Introduce Privacy Legislation After Markup Investigation into Vaccine Websites,” The Markup, May 2021. [Online]. Available: https://themarkup.org/blacklight/2021/05/18/nevada-lawmakers-introduce-privacy-legislation-after-markup-investigation-into-vaccine-websites
Benner et al. [2019] K. Benner, G. Thrush, and M. Isaac, “Facebook Engages in Housing Discrimination With Its Ad Practices, U.S. Says,” The New York Times, Mar. 2019. [Online]. Available: https://www.nytimes.com/2019/03/28/us/politics/facebook-housing-discrimination.html
Feiner [2022] L. Feiner, “DOJ settles lawsuit with Facebook over allegedly discriminatory housing advertising,” CNBC, Jun. 2022. [Online]. Available: https://www.cnbc.com/2022/06/21/doj-settles-with-facebook-over-allegedly-discriminatory-housing-ads.html
Carollo [2023] M. Carollo, “An Algorithm Decides Who Gets a Liver Transplant. Here Are 5 Things to Know,” The Markup, May 2023. [Online]. Available: https://themarkup.org/hello-world/2023/05/20/an-algorithm-decides-who-gets-a-liver-transplant-here-are-5-things-to-know
Carollo and Tanen [2023] M. Carollo and B. Tanen, “Poorer States Suffer Under New Organ Donation Rules, As Livers Go to Waste,” The Markup, Mar. 2023. [Online]. Available: https://themarkup.org/organ-failure/2023/03/21/poorer-states-suffer-under-new-organ-donation-rules-as-livers-go-to-waste
Electronic Frontier Foundation [2007] Electronic Frontier Foundation, “About EFF,” Jul. 2007. [Online]. Available: https://www.eff.org/about
Refugee Law Lab [2020] Refugee Law Lab, “Refugee Law Lab,” Jul. 2020. [Online]. Available: https://refugeelab.ca/
[144] The Citizen Lab, “About the Citizen Lab.” [Online]. Available: https://citizenlab.ca/about/
Migration Tech Monitor [2023] Migration Tech Monitor, “Migration and Technology Monitor,” Feb. 2023. [Online]. Available: https://www.migrationtechmonitor.com
Ada Lovelace Institute [2023] Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/
[147] American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/
[148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us
Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims
ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong
Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/
National Institute of Standards and Technology [2023] National Institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/
Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/
Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/
[155] National Institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home
AI [2023] NIST AI, “Artificial Intelligence Risk Management Framework (AI RMF 1.0),” 2023.
Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework, Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/
Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf
Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443). IEEE, 2003, p. 44.
Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf
Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/
Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209
Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151.
Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im)possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021.
Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19. New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598
Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669
Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761
Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022.
Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23. New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237
Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf
Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf
Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/
Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558
Accountable Tech [2023] Accountable Tech, AI Now Institute, and EPIC, “Zero Trust AI Governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance
Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/
Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as Culture, vol. 31, no. 1, pp. 121–135, 2022.
Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463.
Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479.
Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022.
Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, Feb. 11, 2021.
Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986
Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184.
National Institute of Standards and Technology [2020] National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt
National Institute of Standards and Technology [2000] ——, “Face Recognition Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000
National Institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002
National Institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006
National Institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013
AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/
Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/
Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/
Thrush, and M. Isaac, “Facebook Engages in Housing Discrimination With Its Ad Practices, U.S. Says,” The New York Times, Mar. 2019. [Online]. Available: https://www.nytimes.com/2019/03/28/us/politics/facebook-housing-discrimination.html Feiner [2022] L. Feiner, “DOJ settles lawsuit with Facebook over allegedly discriminatory housing advertising,” CNBC, Jun. 2022. [Online]. Available: https://www.cnbc.com/2022/06/21/doj-settles-with-facebook-over-allegedly-discriminatory-housing-ads.html Carollo [2023] M. Carollo, “An Algorithm Decides Who Gets a Liver Transplant. Here Are 5 Things to Know. – The Markup,” May 2023. [Online]. Available: https://themarkup.org/hello-world/2023/05/20/an-algorithm-decides-who-gets-a-liver-transplant-here-are-5-things-to-know Carollo and Tanen [2023] M. Carollo and B. Tanen, “Poorer States Suffer Under New Organ Donation Rules, As Livers Go to Waste – The Markup,” The Markup, Mar. 2023. [Online]. Available: https://themarkup.org/organ-failure/2023/03/21/poorer-states-suffer-under-new-organ-donation-rules-as-livers-go-to-waste Electronic Frontier Foundation [2007] Electronic Frontier Foundation, “About EFF,” Jul. 2007. [Online]. Available: https://www.eff.org/about Refugee Law Lab [2020] Refugee Law Lab, “Refugee Law Lab,” Jul. 2020. [Online]. Available: https://refugeelab.ca/ [144] The Citizen Lab, “About the Citizen Lab.” [Online]. Available: https://citizenlab.ca/about/ Migration Tech Monitor [2023] Migration Tech Monitor, “Migration and Technology Monitor,” Feb. 2023. [Online]. Available: https://www.migrationtechmonitor.com Ada Lovelace Institute [2023] Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/ [147] American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. 
Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. 
Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. Carollo, “The Markup’s Work Cited in Effort to Outlaw Discriminatory Algorithms – The Markup,” The Markup, Dec. 2021. [Online]. Available: https://themarkup.org/locked-out/2021/12/17/the-markups-work-cited-in-effort-to-outlaw-discriminatory-algorithms Lecher and Keegan [2021] C. Lecher and J. Keegan, “Nevada Lawmakers Introduce Privacy Legislation After Markup Investigation into Vaccine Websites – The Markup,” The Markup, May 2021. [Online]. Available: https://themarkup.org/blacklight/2021/05/18/nevada-lawmakers-introduce-privacy-legislation-after-markup-investigation-into-vaccine-websites Benner et al. [2019] K. Benner, G. Thrush, and M. Isaac, “Facebook Engages in Housing Discrimination With Its Ad Practices, U.S. Says,” The New York Times, Mar. 2019. [Online]. Available: https://www.nytimes.com/2019/03/28/us/politics/facebook-housing-discrimination.html Feiner [2022] L. Feiner, “DOJ settles lawsuit with Facebook over allegedly discriminatory housing advertising,” CNBC, Jun. 2022. [Online]. Available: https://www.cnbc.com/2022/06/21/doj-settles-with-facebook-over-allegedly-discriminatory-housing-ads.html Carollo [2023] M. Carollo, “An Algorithm Decides Who Gets a Liver Transplant. Here Are 5 Things to Know. – The Markup,” May 2023. [Online]. Available: https://themarkup.org/hello-world/2023/05/20/an-algorithm-decides-who-gets-a-liver-transplant-here-are-5-things-to-know Carollo and Tanen [2023] M. Carollo and B. Tanen, “Poorer States Suffer Under New Organ Donation Rules, As Livers Go to Waste – The Markup,” The Markup, Mar. 2023. [Online]. Available: https://themarkup.org/organ-failure/2023/03/21/poorer-states-suffer-under-new-organ-donation-rules-as-livers-go-to-waste Electronic Frontier Foundation [2007] Electronic Frontier Foundation, “About EFF,” Jul. 2007. [Online]. Available: https://www.eff.org/about Refugee Law Lab [2020] Refugee Law Lab, “Refugee Law Lab,” Jul. 2020. [Online]. Available: https://refugeelab.ca/ [144] The Citizen Lab, “About the Citizen Lab.” [Online]. Available: https://citizenlab.ca/about/ Migration Tech Monitor [2023] Migration Tech Monitor, “Migration and Technology Monitor,” Feb. 2023. [Online]. 
Available: https://www.migrationtechmonitor.com Ada Lovelace Institute [2023] Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/ [147] American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. 
Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 
121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ C. Lecher and J. Keegan, “Nevada Lawmakers Introduce Privacy Legislation After Markup Investigation into Vaccine Websites – The Markup,” The Markup, May 2021. [Online]. Available: https://themarkup.org/blacklight/2021/05/18/nevada-lawmakers-introduce-privacy-legislation-after-markup-investigation-into-vaccine-websites Benner et al. 
[2019] K. Benner, G. Thrush, and M. Isaac, “Facebook Engages in Housing Discrimination With Its Ad Practices, U.S. Says,” The New York Times, Mar. 2019. [Online]. Available: https://www.nytimes.com/2019/03/28/us/politics/facebook-housing-discrimination.html Feiner [2022] L. Feiner, “DOJ settles lawsuit with Facebook over allegedly discriminatory housing advertising,” CNBC, Jun. 2022. [Online]. Available: https://www.cnbc.com/2022/06/21/doj-settles-with-facebook-over-allegedly-discriminatory-housing-ads.html Carollo [2023] M. Carollo, “An Algorithm Decides Who Gets a Liver Transplant. Here Are 5 Things to Know. – The Markup,” May 2023. [Online]. Available: https://themarkup.org/hello-world/2023/05/20/an-algorithm-decides-who-gets-a-liver-transplant-here-are-5-things-to-know Carollo and Tanen [2023] M. Carollo and B. Tanen, “Poorer States Suffer Under New Organ Donation Rules, As Livers Go to Waste – The Markup,” The Markup, Mar. 2023. [Online]. Available: https://themarkup.org/organ-failure/2023/03/21/poorer-states-suffer-under-new-organ-donation-rules-as-livers-go-to-waste Electronic Frontier Foundation [2007] Electronic Frontier Foundation, “About EFF,” Jul. 2007. [Online]. Available: https://www.eff.org/about Refugee Law Lab [2020] Refugee Law Lab, “Refugee Law Lab,” Jul. 2020. [Online]. Available: https://refugeelab.ca/ [144] The Citizen Lab, “About the Citizen Lab.” [Online]. Available: https://citizenlab.ca/about/ Migration Tech Monitor [2023] Migration Tech Monitor, “Migration and Technology Monitor,” Feb. 2023. [Online]. Available: https://www.migrationtechmonitor.com Ada Lovelace Institute [2023] Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/ [147] American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. 
Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. 
Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ K. Benner, G. Thrush, and M. Isaac, “Facebook Engages in Housing Discrimination With Its Ad Practices, U.S. Says,” The New York Times, Mar. 2019. [Online]. Available: https://www.nytimes.com/2019/03/28/us/politics/facebook-housing-discrimination.html Feiner [2022] L. Feiner, “DOJ settles lawsuit with Facebook over allegedly discriminatory housing advertising,” CNBC, Jun. 2022. [Online]. Available: https://www.cnbc.com/2022/06/21/doj-settles-with-facebook-over-allegedly-discriminatory-housing-ads.html Carollo [2023] M. Carollo, “An Algorithm Decides Who Gets a Liver Transplant. Here Are 5 Things to Know. – The Markup,” May 2023. [Online]. Available: https://themarkup.org/hello-world/2023/05/20/an-algorithm-decides-who-gets-a-liver-transplant-here-are-5-things-to-know Carollo and Tanen [2023] M. Carollo and B. Tanen, “Poorer States Suffer Under New Organ Donation Rules, As Livers Go to Waste – The Markup,” The Markup, Mar. 2023. [Online]. Available: https://themarkup.org/organ-failure/2023/03/21/poorer-states-suffer-under-new-organ-donation-rules-as-livers-go-to-waste Electronic Frontier Foundation [2007] Electronic Frontier Foundation, “About EFF,” Jul. 2007. [Online]. Available: https://www.eff.org/about Refugee Law Lab [2020] Refugee Law Lab, “Refugee Law Lab,” Jul. 2020. [Online]. Available: https://refugeelab.ca/ [144] The Citizen Lab, “About the Citizen Lab.” [Online]. Available: https://citizenlab.ca/about/ Migration Tech Monitor [2023] Migration Tech Monitor, “Migration and Technology Monitor,” Feb. 2023. [Online]. Available: https://www.migrationtechmonitor.com Ada Lovelace Institute [2023] Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/ [147] American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. 
Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. 
Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. 
Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ L. Feiner, “DOJ settles lawsuit with Facebook over allegedly discriminatory housing advertising,” CNBC, Jun. 2022. [Online]. Available: https://www.cnbc.com/2022/06/21/doj-settles-with-facebook-over-allegedly-discriminatory-housing-ads.html Carollo [2023] M. Carollo, “An Algorithm Decides Who Gets a Liver Transplant. Here Are 5 Things to Know. – The Markup,” May 2023. [Online]. Available: https://themarkup.org/hello-world/2023/05/20/an-algorithm-decides-who-gets-a-liver-transplant-here-are-5-things-to-know Carollo and Tanen [2023] M. Carollo and B. 
[2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/ [147] American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. 
Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. 
[Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. 
National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. 
Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. 
Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. 
Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. 
Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. 
[Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. 
Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. 
Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. 
[Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. 
Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. 
[2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. 
Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. 
Birhane et al. [2022] A. Birhane, P. Kalluri, D.
Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. 
Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ B. 
Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. 
Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. 
Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. 
Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ B. 
Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. 
Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. 
[Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 
Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. 
Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. 
Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/ [147] American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. 
[Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. 
Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. 
Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. 
Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   
New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. 
Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. 
Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. 
[Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. 
National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. 
Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. 
Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. 
Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. 
[2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. 
Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 
2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. 
Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. 
D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. 
National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. 
Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. 
National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 
90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. 
Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. 
Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. 
[Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. 
Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. 
Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 
1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. 
Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. 
[Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. 
Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. 
Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. 
[Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. 
Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. 
Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. 
Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. 
Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. 
Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. 
Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. 
Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. 
Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 
Carollo [2021] M. Carollo, “The Markup’s Work Cited in Effort to Outlaw Discriminatory Algorithms – The Markup,” The Markup, Dec. 2021. [Online]. Available: https://themarkup.org/locked-out/2021/12/17/the-markups-work-cited-in-effort-to-outlaw-discriminatory-algorithms
Lecher and Keegan [2021] C. Lecher and J. Keegan, “Nevada Lawmakers Introduce Privacy Legislation After Markup Investigation into Vaccine Websites – The Markup,” The Markup, May 2021. [Online]. Available: https://themarkup.org/blacklight/2021/05/18/nevada-lawmakers-introduce-privacy-legislation-after-markup-investigation-into-vaccine-websites
Benner et al. [2019] K. Benner, G. Thrush, and M. Isaac, “Facebook Engages in Housing Discrimination With Its Ad Practices, U.S. Says,” The New York Times, Mar. 2019. [Online]. Available: https://www.nytimes.com/2019/03/28/us/politics/facebook-housing-discrimination.html
Feiner [2022] L. Feiner, “DOJ settles lawsuit with Facebook over allegedly discriminatory housing advertising,” CNBC, Jun. 2022. [Online]. Available: https://www.cnbc.com/2022/06/21/doj-settles-with-facebook-over-allegedly-discriminatory-housing-ads.html
Carollo [2023] M. Carollo, “An Algorithm Decides Who Gets a Liver Transplant. Here Are 5 Things to Know. – The Markup,” The Markup, May 2023. [Online]. Available: https://themarkup.org/hello-world/2023/05/20/an-algorithm-decides-who-gets-a-liver-transplant-here-are-5-things-to-know
Carollo and Tanen [2023] M. Carollo and B. Tanen, “Poorer States Suffer Under New Organ Donation Rules, As Livers Go to Waste – The Markup,” The Markup, Mar. 2023. [Online]. Available: https://themarkup.org/organ-failure/2023/03/21/poorer-states-suffer-under-new-organ-donation-rules-as-livers-go-to-waste
Electronic Frontier Foundation [2007] Electronic Frontier Foundation, “About EFF,” Jul. 2007. [Online]. Available: https://www.eff.org/about
Refugee Law Lab [2020] Refugee Law Lab, “Refugee Law Lab,” Jul. 2020. [Online]. Available: https://refugeelab.ca/
Migration Tech Monitor [2023] Migration Tech Monitor, “Migration and Technology Monitor,” Feb. 2023. [Online]. Available: https://www.migrationtechmonitor.com
Ada Lovelace Institute [2023] Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/
[147] American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/
[148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us
Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims
ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong
Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/
National Institute of Standards and Technology [2023] National Institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/
Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/
Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/
[155] National Institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home
AI [2023] National Institute of Standards and Technology, “Artificial Intelligence Risk Management Framework (AI RMF 1.0),” 2023.
Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/
Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf
Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443). IEEE, 2003, p. 44.
Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf
Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/
Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209
Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151.
Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im)possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021.
Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19. New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598
Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669
Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761
Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022.
Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23. New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237
Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf
Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf
Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/
Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558
Accountable Tech [2023] Accountable Tech and AI Now Institute, “Zero Trust AI Governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance
Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/
Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as Culture, vol. 31, no. 1, pp. 121–135, 2022.
Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463.
Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479.
Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022.
Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, Feb. 2021.
Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986
National Institute of Standards and Technology [2020] National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt
National Institute of Standards and Technology [2000] ——, “Face Recognition Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000
National Institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002
National Institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006
National Institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013
AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/
Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/
Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/
[144] The Citizen Lab, “About the Citizen Lab.” [Online]. 
Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. 
[Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/ [147] American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. 
Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. 
Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. 
Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. 
Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. 
[Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. 
Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. 
Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ K. 
Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. 
Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. 
Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. 
Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   
New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. 
Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. 
Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. 
Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. 
Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. 
Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. 
Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. 
Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. 
Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. 
Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. 
Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. 
Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. 
Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. 
Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. 
Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. 
[Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. 
Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. 
Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. 
Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/
[131] C. Lecher and J. Keegan, “Nevada Lawmakers Introduce Privacy Legislation After Markup Investigation into Vaccine Websites,” The Markup, May 2021. [Online]. Available: https://themarkup.org/blacklight/2021/05/18/nevada-lawmakers-introduce-privacy-legislation-after-markup-investigation-into-vaccine-websites
K. Benner, G. Thrush, and M. Isaac, “Facebook Engages in Housing Discrimination With Its Ad Practices, U.S. Says,” The New York Times, Mar. 2019. [Online]. Available: https://www.nytimes.com/2019/03/28/us/politics/facebook-housing-discrimination.html
L. Feiner, “DOJ settles lawsuit with Facebook over allegedly discriminatory housing advertising,” CNBC, Jun. 2022. [Online]. Available: https://www.cnbc.com/2022/06/21/doj-settles-with-facebook-over-allegedly-discriminatory-housing-ads.html
M. Carollo, “An Algorithm Decides Who Gets a Liver Transplant. Here Are 5 Things to Know,” The Markup, May 2023. [Online]. Available: https://themarkup.org/hello-world/2023/05/20/an-algorithm-decides-who-gets-a-liver-transplant-here-are-5-things-to-know
M. Carollo and B. Tanen, “Poorer States Suffer Under New Organ Donation Rules, As Livers Go to Waste,” The Markup, Mar. 2023. [Online]. Available: https://themarkup.org/organ-failure/2023/03/21/poorer-states-suffer-under-new-organ-donation-rules-as-livers-go-to-waste
Electronic Frontier Foundation, “About EFF,” Jul. 2007. [Online]. Available: https://www.eff.org/about
Refugee Law Lab, “Refugee Law Lab,” Jul. 2020. [Online]. Available: https://refugeelab.ca/
[144] The Citizen Lab, “About the Citizen Lab.” [Online]. Available: https://citizenlab.ca/about/
Migration Tech Monitor, “Migration and Technology Monitor,” Feb. 2023. [Online]. Available: https://www.migrationtechmonitor.com
Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/
[147] American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/
[148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us
K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims
ACLU of Idaho, “K.W. v. Armstrong,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong
Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/
National Institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/
Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/
——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/
[155] National Institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home
National Institute of Standards and Technology, “Artificial Intelligence Risk Management Framework (AI RMF 1.0),” 2023.
Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/
P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, NISTIR 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf
P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443). IEEE, 2003, p. 44.
M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf
Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/
M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209
I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151.
S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im)possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021.
A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19. New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598
A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669
A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761
B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022.
B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23. New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237
Y. D. Clarke, “Algorithmic Accountability Act of 2022,” 2022. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf
A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf
B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/
P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558
Accountable Tech, AI Now Institute, and EPIC, “Zero Trust AI Governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance
M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/
T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as Culture, vol. 31, no. 1, pp. 121–135, 2022.
W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463.
Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. 
Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. 
Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. 
Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. 
[Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. 
Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. 
Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. 
[Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. 
Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. 
[2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. 
Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. 
Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. 
[2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. 
Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf
Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443). IEEE, 2003, p. 44.
Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf
Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/
Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209
Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151.
Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im)possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021.
Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19. New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598
Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669
Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761
Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022.
Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23. New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237
Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf
Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf
Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/
Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558
Accountable Tech [2023] Accountable Tech and AI Now Institute, “Zero trust AI governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance
Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/
Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as Culture, vol. 31, no. 1, pp. 121–135, 2022.
Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463.
Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479.
Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022.
Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, Feb. 11, 2021.
Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986
Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184.
National Institute of Standards and Technology [2020] National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt
National Institute of Standards and Technology [2000] ——, “Face Recognition Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000
National Institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002
National Institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006
National Institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013
AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/
Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/
Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/
Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/
Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/
National Institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home
National Institute of Standards and Technology, “Artificial Intelligence Risk Management Framework (AI RMF 1.0),” 2023.
Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/
Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. 
[Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. 
Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. 
[Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. 
[Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. 
Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. 
Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. 
Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. 
Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. 
National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. 
Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. 
Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. 
Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. 
Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ B. 
Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. 
Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. 
Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. 
Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ B. 
Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. 
Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. 
[Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 
2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. 
Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. 
[Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/
K. Benner, G. Thrush, and M. Isaac, “Facebook Engages in Housing Discrimination With Its Ad Practices, U.S. Says,” The New York Times, Mar. 2019. [Online]. Available: https://www.nytimes.com/2019/03/28/us/politics/facebook-housing-discrimination.html
L. Feiner, “DOJ settles lawsuit with Facebook over allegedly discriminatory housing advertising,” CNBC, Jun. 2022. [Online]. Available: https://www.cnbc.com/2022/06/21/doj-settles-with-facebook-over-allegedly-discriminatory-housing-ads.html
M. Carollo, “An Algorithm Decides Who Gets a Liver Transplant. Here Are 5 Things to Know,” The Markup, May 2023. [Online]. Available: https://themarkup.org/hello-world/2023/05/20/an-algorithm-decides-who-gets-a-liver-transplant-here-are-5-things-to-know
M. Carollo and B. Tanen, “Poorer States Suffer Under New Organ Donation Rules, As Livers Go to Waste,” The Markup, Mar. 2023. [Online]. Available: https://themarkup.org/organ-failure/2023/03/21/poorer-states-suffer-under-new-organ-donation-rules-as-livers-go-to-waste
Electronic Frontier Foundation, “About EFF,” Jul. 2007. [Online]. Available: https://www.eff.org/about
Refugee Law Lab, “Refugee Law Lab,” Jul. 2020. [Online]. Available: https://refugeelab.ca/
The Citizen Lab, “About the Citizen Lab.” [Online]. Available: https://citizenlab.ca/about/
Migration Tech Monitor, “Migration and Technology Monitor,” Feb. 2023. [Online]. Available: https://www.migrationtechmonitor.com
Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/
American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/
Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us
K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims
ACLU of Idaho, “K.W. v. Armstrong,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong
Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/
National Institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/
Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/
Information Commissioner’s Office, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/
National Institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home
National Institute of Standards and Technology, “Artificial Intelligence Risk Management Framework (AI RMF 1.0),” 2023.
Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. 
Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ L. Feiner, “DOJ settles lawsuit with Facebook over allegedly discriminatory housing advertising,” CNBC, Jun. 2022. [Online]. Available: https://www.cnbc.com/2022/06/21/doj-settles-with-facebook-over-allegedly-discriminatory-housing-ads.html Carollo [2023] M. Carollo, “An Algorithm Decides Who Gets a Liver Transplant. Here Are 5 Things to Know. – The Markup,” May 2023. [Online]. Available: https://themarkup.org/hello-world/2023/05/20/an-algorithm-decides-who-gets-a-liver-transplant-here-are-5-things-to-know Carollo and Tanen [2023] M. Carollo and B. Tanen, “Poorer States Suffer Under New Organ Donation Rules, As Livers Go to Waste – The Markup,” The Markup, Mar. 2023. [Online]. Available: https://themarkup.org/organ-failure/2023/03/21/poorer-states-suffer-under-new-organ-donation-rules-as-livers-go-to-waste Electronic Frontier Foundation [2007] Electronic Frontier Foundation, “About EFF,” Jul. 2007. [Online]. Available: https://www.eff.org/about Refugee Law Lab [2020] Refugee Law Lab, “Refugee Law Lab,” Jul. 2020. [Online]. Available: https://refugeelab.ca/ [144] The Citizen Lab, “About the Citizen Lab.” [Online]. Available: https://citizenlab.ca/about/ Migration Tech Monitor [2023] Migration Tech Monitor, “Migration and Technology Monitor,” Feb. 2023. [Online]. Available: https://www.migrationtechmonitor.com Ada Lovelace Institute [2023] Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/ [147] American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. 
Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. 
Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. 
[Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Migration Tech Monitor, “Migration and Technology Monitor,” Feb. 2023. [Online]. Available: https://www.migrationtechmonitor.com Ada Lovelace Institute [2023] Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/ [147] American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. 
Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. 
Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. 
[Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/ [147] American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. 
Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. 
Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. 
Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. 
Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. 
[Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. 
Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. 
Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ K. 
Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. 
Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. 
Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. 
Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   
New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. 
Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. 
Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. 
Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. 
Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. 
Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. 
Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   
New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. 
Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. 
Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. 
Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. 
[Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. 
Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. 
Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. 
[Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. 
Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. 
Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. 
Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. 
Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. 
Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. 
Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. 
Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/
L. Feiner, “DOJ settles lawsuit with Facebook over allegedly discriminatory housing advertising,” CNBC, Jun. 2022. [Online]. Available: https://www.cnbc.com/2022/06/21/doj-settles-with-facebook-over-allegedly-discriminatory-housing-ads.html
M. Carollo, “An Algorithm Decides Who Gets a Liver Transplant. Here Are 5 Things to Know,” The Markup, May 2023. [Online]. Available: https://themarkup.org/hello-world/2023/05/20/an-algorithm-decides-who-gets-a-liver-transplant-here-are-5-things-to-know
M. Carollo and B. Tanen, “Poorer States Suffer Under New Organ Donation Rules, As Livers Go to Waste,” The Markup, Mar. 2023. [Online]. Available: https://themarkup.org/organ-failure/2023/03/21/poorer-states-suffer-under-new-organ-donation-rules-as-livers-go-to-waste
Electronic Frontier Foundation, “About EFF,” Jul. 2007. [Online]. Available: https://www.eff.org/about
Refugee Law Lab, “Refugee Law Lab,” Jul. 2020. [Online]. Available: https://refugeelab.ca/
The Citizen Lab, “About the Citizen Lab.” [Online]. Available: https://citizenlab.ca/about/
Migration Tech Monitor, “Migration and Technology Monitor,” Feb. 2023. [Online]. Available: https://www.migrationtechmonitor.com
Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/
American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/
Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us
K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims
ACLU of Idaho, “K.W. v. Armstrong,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong
Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/
National Institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/
Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/
Information Commissioner’s Office, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/
National Institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home
National Institute of Standards and Technology, “Artificial Intelligence Risk Management Framework (AI RMF 1.0),” 2023.
Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework, Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/
P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, NISTIR 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf
P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443). IEEE, 2003, p. 44.
M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf
Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/
M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209
I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151.
S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im)possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021.
A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19. New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598
A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669
A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761
B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022.
B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23. New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237
Y. D. Clarke, “Algorithmic Accountability Act of 2022,” 2022. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf
A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf
B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/
P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558
Accountable Tech, AI Now Institute, and EPIC, “Zero Trust AI Governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance
M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/
T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as Culture, vol. 31, no. 1, pp. 121–135, 2022.
W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463.
D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479.
M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022.
H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, Feb. 11, 2021.
L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986
A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184.
National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt
National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000
National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002
National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006
National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013
AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/
L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/
J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/
Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. 
[Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. 
National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. 
Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. 
Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. 
Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. 
Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. 
[Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. 
Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. 
Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. 
Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. 
[Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. 
National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt
——, “Face Recognition Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000
——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002
——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006
——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013
AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/
L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/
J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/
Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/
——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/
National Institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home
National Institute of Standards and Technology, “Artificial Intelligence Risk Management Framework (AI RMF 1.0),” 2023.
Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/
P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, NISTIR 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf
P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443). IEEE, 2003, p. 44.
M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf
Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/
M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209
I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151.
S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im)possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021.
A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19. New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598
A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669
A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761
B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022.
B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23. New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237
Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf
A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI Tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf
B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/
P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558
Accountable Tech, AI Now Institute, and EPIC, “Zero Trust AI Governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance
M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/
T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as Culture, vol. 31, no. 1, pp. 121–135, 2022.
W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463.
D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479.
M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022.
H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, Feb. 11, 2021.
L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986
A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184.
National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 
90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. 
Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. 
Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. 
[Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. 
Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. 
Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 
1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. 
Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. 
[Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. 
Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. 
Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. 
[Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. 
Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. 
Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. 
M. Carollo, “An Algorithm Decides Who Gets a Liver Transplant. Here Are 5 Things to Know,” The Markup, May 2023. [Online]. Available: https://themarkup.org/hello-world/2023/05/20/an-algorithm-decides-who-gets-a-liver-transplant-here-are-5-things-to-know
M. Carollo and B. Tanen, “Poorer States Suffer Under New Organ Donation Rules, As Livers Go to Waste,” The Markup, Mar. 2023. [Online]. Available: https://themarkup.org/organ-failure/2023/03/21/poorer-states-suffer-under-new-organ-donation-rules-as-livers-go-to-waste
Electronic Frontier Foundation, “About EFF,” Jul. 2007. [Online]. Available: https://www.eff.org/about
Refugee Law Lab, “Refugee Law Lab,” Jul. 2020. [Online]. Available: https://refugeelab.ca/
The Citizen Lab, “About the Citizen Lab.” [Online]. Available: https://citizenlab.ca/about/
Migration Tech Monitor, “Migration and Technology Monitor,” Feb. 2023. [Online]. Available: https://www.migrationtechmonitor.com
Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/
American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/
Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us
K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims
ACLU of Idaho, “K.W. v. Armstrong,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong
Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/
National Institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/
Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/
——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/
National Institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home
National Institute of Standards and Technology, “Artificial Intelligence Risk Management Framework (AI RMF 1.0),” 2023.
Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/
P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf
P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443). IEEE, 2003, p. 44.
M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf
Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/
M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209
I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151.
S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im)possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021.
A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19. New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598
A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669
A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761
B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022.
B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23. New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237
Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Electronic Frontier Foundation, “About EFF,” Jul. 2007. [Online]. Available: https://www.eff.org/about Refugee Law Lab [2020] Refugee Law Lab, “Refugee Law Lab,” Jul. 2020. [Online]. Available: https://refugeelab.ca/ [144] The Citizen Lab, “About the Citizen Lab.” [Online]. Available: https://citizenlab.ca/about/ Migration Tech Monitor [2023] Migration Tech Monitor, “Migration and Technology Monitor,” Feb. 2023. [Online]. Available: https://www.migrationtechmonitor.com Ada Lovelace Institute [2023] Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/ [147] American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 
2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. 
Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. 
National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Refugee Law Lab, “Refugee Law Lab,” Jul. 2020. [Online]. Available: https://refugeelab.ca/ [144] The Citizen Lab, “About the Citizen Lab.” [Online]. Available: https://citizenlab.ca/about/ Migration Tech Monitor [2023] Migration Tech Monitor, “Migration and Technology Monitor,” Feb. 2023. [Online]. Available: https://www.migrationtechmonitor.com Ada Lovelace Institute [2023] Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/ [147] American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. 
Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. 
Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. 
[2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ The Citizen Lab, “About the Citizen Lab.” [Online]. Available: https://citizenlab.ca/about/ Migration Tech Monitor [2023] Migration Tech Monitor, “Migration and Technology Monitor,” Feb. 2023. [Online]. Available: https://www.migrationtechmonitor.com Ada Lovelace Institute [2023] Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/ [147] American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. 
Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. 
Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. 
[2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Migration Tech Monitor, “Migration and Technology Monitor,” Feb. 2023. [Online]. Available: https://www.migrationtechmonitor.com Ada Lovelace Institute [2023] Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/ [147] American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. 
Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. 
Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. 
[2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/ [147] American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. 
Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. 
[Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. 
Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. 
Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. 
Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. 
Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. 
Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. 
Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. 
Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. 
[Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. 
Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. 
Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. 
Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. 
Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. 
Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. 
Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. 
Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. 
Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. 
Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ B. 
Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. 
Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. 
[Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 
2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. 
Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. 
[Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/
135. M. Carollo and B. Tanen, “Poorer States Suffer Under New Organ Donation Rules, As Livers Go to Waste,” The Markup, Mar. 2023. [Online]. Available: https://themarkup.org/organ-failure/2023/03/21/poorer-states-suffer-under-new-organ-donation-rules-as-livers-go-to-waste
Electronic Frontier Foundation [2007] Electronic Frontier Foundation, “About EFF,” Jul. 2007. [Online]. Available: https://www.eff.org/about
Refugee Law Lab [2020] Refugee Law Lab, “Refugee Law Lab,” Jul. 2020. [Online]. Available: https://refugeelab.ca/
[144] The Citizen Lab, “About the Citizen Lab.” [Online]. Available: https://citizenlab.ca/about/
Migration Tech Monitor [2023] Migration Tech Monitor, “Migration and Technology Monitor,” Feb. 2023. [Online]. Available: https://www.migrationtechmonitor.com
Ada Lovelace Institute [2023] Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/
[147] American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/
[148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us
Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims
ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong
Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/
National Institute of Standards and Technology [2023] National Institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/
Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/
Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/
[155] National Institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home
AI [2023] National Institute of Standards and Technology, “Artificial Intelligence Risk Management Framework (AI RMF 1.0),” 2023.
Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/
Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf
Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443). IEEE, 2003, p. 44.
Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf
Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/
Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209
Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151.
Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im)possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021.
Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19. New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598
Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669
Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761
Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022.
Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23. New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237
Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf
Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf
Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. 
Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Electronic Frontier Foundation, “About EFF,” Jul. 2007. [Online]. Available: https://www.eff.org/about Refugee Law Lab [2020] Refugee Law Lab, “Refugee Law Lab,” Jul. 2020. [Online]. Available: https://refugeelab.ca/ [144] The Citizen Lab, “About the Citizen Lab.” [Online]. Available: https://citizenlab.ca/about/ Migration Tech Monitor [2023] Migration Tech Monitor, “Migration and Technology Monitor,” Feb. 2023. [Online]. Available: https://www.migrationtechmonitor.com Ada Lovelace Institute [2023] Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/ [147] American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. 
[Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. 
Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. 
Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ K. 
Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. 
Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. 
Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. 
Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   
New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. 
Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. 
Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. 
Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. 
Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. 
Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. 
Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   
New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. 
Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. 
Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. 
Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. 
Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. 
Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. 
Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. 
[Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. 
Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. 
Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. 
[Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. 
Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. 
Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. 
Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. 
Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. 
Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. 
Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. 
Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. 
Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. 
[Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. 
Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. 
Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. 
Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. 
Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. 
Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. 
Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/
Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. 
Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ The Citizen Lab, “About the Citizen Lab.” [Online]. Available: https://citizenlab.ca/about/ Migration Tech Monitor [2023] Migration Tech Monitor, “Migration and Technology Monitor,” Feb. 2023. [Online]. Available: https://www.migrationtechmonitor.com Ada Lovelace Institute [2023] Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/ [147] American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. 
Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. 
Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. 
Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Migration Tech Monitor, “Migration and Technology Monitor,” Feb. 2023. [Online]. Available: https://www.migrationtechmonitor.com Ada Lovelace Institute [2023] Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/ [147] American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. 
Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. 
Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Ada Lovelace Institute, “About,” 2023. [Online]. 
Available: https://www.adalovelaceinstitute.org/about/ [147] American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. 
Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. 
Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. 
Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. 
Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. 
Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. 
[Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. 
Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. 
Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. 
[Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. 
Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. 
Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. 
Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. 
Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. 
[Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. 
Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. 
Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. 
[2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. 
Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 
11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. 
Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 
11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. 
Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. 
Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. 
Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. 
Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. 
Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. 
Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/
[137] Refugee Law Lab, “Refugee Law Lab,” Jul. 2020. [Online]. Available: https://refugeelab.ca/
[144] The Citizen Lab, “About the Citizen Lab.” [Online]. Available: https://citizenlab.ca/about/
Migration Tech Monitor, “Migration and Technology Monitor,” Feb. 2023. [Online]. Available: https://www.migrationtechmonitor.com
Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/
[147] American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/
[148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us
K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims
ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong
Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/
National Institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/
Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/
——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/
[155] National Institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home
National Institute of Standards and Technology, “Artificial Intelligence Risk Management Framework (AI RMF 1.0),” 2023.
Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/
P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, NIST Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf
P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443). IEEE, 2003, p. 44.
M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf
Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/
M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209
I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151.
S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im)possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021.
A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19. New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598
A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669
A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761
B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022.
B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23. New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237
Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf
A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf
B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/
P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558
Accountable Tech, AI Now Institute, and EPIC, “Zero Trust AI Governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance
M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/
T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as Culture, vol. 31, no. 1, pp. 121–135, 2022.
W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463.
D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479.
M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022.
H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, Feb. 11, 2021.
L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986
A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184.
National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt
——, “Face Recognition Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000
——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002
——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006
——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013
AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/
L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/
J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/
Ada Lovelace Institute, “About,” 2023. [Online].
Available: https://www.adalovelaceinstitute.org/about/ [147] American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. 
Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. 
Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. 
Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. 
Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. 
Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. 
[Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. 
Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. 
Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. 
Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. 
[Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. 
National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. 
Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. 
Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. 
[Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. 
Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. 
Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. 
Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. 
Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. 
Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. 
Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. 
Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. 
[Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. 
Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. 
Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. 
Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. 
Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. 
[Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. 
Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. 
Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. 
Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf
Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/
Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209
Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151.
Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im)possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021.
Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19. New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598
Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669
Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761
Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022.
Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf
Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf
Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/
Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558
Accountable Tech [2023] Accountable Tech and AI Now Institute, “Zero Trust AI Governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance
Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/
Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as Culture, vol. 31, no. 1, pp. 121–135, 2022.
Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463.
Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479.
Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022.
Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, Feb. 11, 2021.
Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986
Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184.
National Institute of Standards and Technology [2020] National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt
National Institute of Standards and Technology [2000] ——, “Face Recognition Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000
National Institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002
National Institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006
National Institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013
AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/
Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/
Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/
P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf
Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443). IEEE, 2003, p. 44.
Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.
New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. 
Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. 
Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. 
Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. 
Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. 
[2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 
1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. 
Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. 
Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. 
Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. 
Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. 
35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. 
Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. 
Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 
2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. 
Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. 
Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. 
Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. 
Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. 
[2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. 
Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. 
Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. 
Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. 
Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. 
Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. 
Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. 
Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. 
[Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. 
Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. 
Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. 
Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. 
[2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. 
Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 
2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. 
Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. 
D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. 
National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. 
Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. 
Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 
2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. 
Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. 
Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. 
Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ L. 
Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/
  139. Migration Tech Monitor, “Migration and Technology Monitor,” Feb. 2023. [Online]. Available: https://www.migrationtechmonitor.com Ada Lovelace Institute [2023] Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/ [147] American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. 
Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. 
Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Ada Lovelace Institute, “About,” 2023. [Online]. 
Available: https://www.adalovelaceinstitute.org/about/ [147] American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. 
Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. 
Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/ [148] Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. 
Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. 
Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. 
Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. 
[Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. 
Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. 
Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. 
Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. 
Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. 
[Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. 
Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. 
Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. 
[2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. 
Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 
11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. 
Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 
11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. 
Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. 
[Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. 
Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. 
[Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. 
Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. 
Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. 
Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. 
D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   
New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. 
Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. 
Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. 
Ada Lovelace Institute, “About,” 2023. [Online]. Available: https://www.adalovelaceinstitute.org/about/
American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/
Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us
K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims
ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong
Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/
National Institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/
Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/
Information Commissioner’s Office, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/
National Institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home
National Institute of Standards and Technology, “Artificial Intelligence Risk Management Framework (AI RMF 1.0),” 2023.
Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/
P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf
P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443). IEEE, 2003, p. 44.
M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf
Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/
M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209
I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151.
S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im)possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021.
A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19. New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598
A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669
A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761
B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022.
B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23. New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237
Y. D. Clarke, “Algorithmic Accountability Act of 2022,” 2022. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf
A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf
P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558
Accountable Tech, AI Now Institute, and EPIC, “Zero Trust AI Governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance
M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/
T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as Culture, vol. 31, no. 1, pp. 121–135, 2022.
W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463.
D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479.
M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022.
H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, Feb. 11, 2021.
L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986
A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184.
National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt
National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000
National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002
National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006
National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013
AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/
L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/
J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/
Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023.
[Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. 
Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. 
Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. 
Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. 
[Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. 
National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. 
Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. 
Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. 
[Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. 
Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. 
Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. 
Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. 
Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. 
11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. 
Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. 
[Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. 
Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. 
[Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. 
Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. 
Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. 
Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. 
D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   
New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. 
Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. 
Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. 
American Civil Liberties Union, “Home.” [Online]. Available: https://www.aclu.org/
Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us
K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims
ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong
Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/
National Institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/
Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/
——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/
National Institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home
National Institute of Standards and Technology, “Artificial Intelligence Risk Management Framework (AI RMF 1.0),” 2023.
Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/
P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf
P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443). IEEE, 2003, p. 44.
M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf
Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/
M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209
I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151.
S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im)possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021.
A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19. New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598
A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669
A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761
B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022.
B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23. New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237
D. G. Widder, D. Zhen, L. Dabbish, and J.
Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. 
Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. 
Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. 
Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. 
Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. 
Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. 
Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. 
Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. 
[Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. 
Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. 
Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. 
Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. 
Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. 
Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. 
Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. 
Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. 
Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. 
[Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. 
Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. 
Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. 
Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. 
Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. 
[Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. 
Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. 
Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. 
[2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. 
Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 
11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. 
Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 
1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. 
Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. 
Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. 
Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. 
Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. 
Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. 
Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. 
Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/
  142. Migration Tech Monitor, “About Us.” [Online]. Available: https://www.migrationtechmonitor.com/about-us Gullo [2022] K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/ National institute of Standards and Technology [2023] National institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/ Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/ Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. 
Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. 
[2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims ACLU of Idaho [2016] ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong Information Commissioner’s Office [2023a] Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. 
Available: https://ico.org.uk/
National Institute of Standards and Technology [2023] National Institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/
Information Commissioner’s Office [2023b] Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/
Information Commissioner’s Office [2022] ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/
[155] National Institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home
AI [2023] National Institute of Standards and Technology, “Artificial Intelligence Risk Management Framework (AI RMF 1.0),” 2023.
Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/
Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf
Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443). IEEE, 2003, p. 44.
Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf
Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/
Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209
Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151.
Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im)possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021.
Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19. New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598
Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669
Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761
Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022.
Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23. New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237
Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf
Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf
Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/
Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558
Accountable Tech [2023] Accountable Tech and AI Now Institute, “Zero Trust AI Governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance
Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/
Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as Culture, vol. 31, no. 1, pp. 121–135, 2022.
Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463.
Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479.
Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022.
Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, Feb. 11, 2021.
Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986
Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184.
National Institute of Standards and Technology [2020] National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt
National Institute of Standards and Technology [2000] ——, “Face Recognition Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000
National Institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002
National Institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006
National Institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013
AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/
Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/
Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/
ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong
Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. 
Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. 
[Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. 
Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. 
Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. 
[2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. 
Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 
11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. 
Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 
11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. 
Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. 
[Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. 
Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23. New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237
Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf
Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf
Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/
Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558
Accountable Tech [2023] Accountable Tech and AI Now Institute, “Zero Trust AI Governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance
Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/
Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: The circulation of ‘ethics’ in big tech,” Science as Culture, vol. 31, no. 1, pp. 121–135, 2022.
Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463.
Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479.
Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: Value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022.
Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, Feb. 11, 2021.
Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986
Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184.
National Institute of Standards and Technology [2020] National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt
National Institute of Standards and Technology [2000] ——, “Face Recognition Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000
National Institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002
National Institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006
National Institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. 
Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. 
Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. 
Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/
K. Gullo, “EFF Client Erik Johnson and Proctorio Settle Lawsuit Over Bogus DMCA Claims,” Mar. 2022. [Online]. Available: https://www.eff.org/deeplinks/2022/03/eff-client-eric-johnson-and-proctorio-settle-lawsuit-over-bogus-dmca-claims
ACLU of Idaho, “K.W. v. Armstrong | ACLU of Idaho,” Jul. 2016. [Online]. Available: https://www.acluidaho.org/en/cases/kw-v-armstrong
Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/
National Institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/
Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/
Information Commissioner’s Office, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/
National Institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home
National Institute of Standards and Technology, “Artificial Intelligence Risk Management Framework (AI RMF 1.0),” 2023.
Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/
P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf
P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443). IEEE, 2003, p. 44.
M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf
Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/
M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209
I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151.
S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im)possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021.
A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19. New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598
A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669
A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761
B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022.
B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23. New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237
Y. D. Clarke, “Algorithmic Accountability Act of 2022,” H.R. 6580, 117th Congress. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf
A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf
B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/
P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558
Accountable Tech and AI Now Institute, “Zero Trust AI Governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance
M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/
T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as Culture, vol. 31, no. 1, pp. 121–135, 2022.
W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463.
D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479.
M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022.
H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, Feb. 11, 2021.
L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986
A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184.
National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt
National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000
National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002
National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006
National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013
AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/
L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/
J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/
New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. 
Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/ [155] National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. 
Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. 
Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. 
[Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ National institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home AI [2023] N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. 
Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. 
[2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ N. AI, “Artificial intelligence risk management framework (ai rmf 1.0),” 2023. Information Commissioner’s Office [2023c] Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. 
Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. 
Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. 
Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. 
Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. 
Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. 
Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, Feb. 11, 2021.
Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986
Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184.
National Institute of Standards and Technology [2020] National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt
National Institute of Standards and Technology [2000] National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000
National Institute of Standards and Technology [2002] National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002
National Institute of Standards and Technology [2006] National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006
National Institute of Standards and Technology [2013] National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013
AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/
Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/
Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/
P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443). IEEE, 2003, p. 44.
Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf
Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/
Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209
Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151.
Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im)possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021.
Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19. New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598
Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669
Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761
Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022.
Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23. New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237
Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf
Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf
Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/
Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558
Accountable Tech [2023] Accountable Tech and AI Now Institute, “Zero Trust AI Governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance
Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/
Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as Culture, vol. 31, no. 1, pp. 121–135, 2022.
Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463.
Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479.
Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022.
Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. 
Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. 
Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. 
Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. 
Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. 
Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. 
[Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. 
Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. 
Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. 
Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. 
Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. 
Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. 
Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/
Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. 
National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. 
Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. 
Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. 
Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. 
Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. 
Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 
2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. 
Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. 
Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. 
Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. 
Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. 
Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. 
Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 
Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. 
[Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/
Information Commissioner’s Office, “Information Commissioner’s Office (ICO),” Oct. 2023. [Online]. Available: https://ico.org.uk/
National Institute of Standards and Technology, “National Institute of Standards and Technology,” Oct. 2023. [Online]. Available: https://www.nist.gov/
Information Commissioner’s Office, “ICO fines TikTok £12.7 million for misusing children’s data,” May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/
——, “Audits and overview reports,” Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/
National Institute of Standards and Technology AIRC Team, “NIST Trustworthy & Responsible AI Resource Center.” [Online]. Available: https://airc.nist.gov/home
National Institute of Standards and Technology, “Artificial Intelligence Risk Management Framework (AI RMF 1.0),” 2023.
Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/
P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf
P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443). IEEE, 2003, p. 44.
M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf
Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/
M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209
I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151.
S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im)possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021.
A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19. New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598
A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669
A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761
B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022.
B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23. New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237
Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Feb. 2022. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf
A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI Tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf
B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/
P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558
Accountable Tech, AI Now Institute, and EPIC, “Zero Trust AI Governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance
M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/
T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as Culture, vol. 31, no. 1, pp. 121–135, 2022.
W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463.
D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479.
M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022.
H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, Feb. 11, 2021.
L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986
A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184.
National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt
——, “Face Recognition Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000
——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002
——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006
——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013
AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/
L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/
J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/
Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. 
Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. 
Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. 
Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. 
Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. 
Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. 
Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. 
Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. 
Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. 
Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. 
Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. 
Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/
National Institute of Standards and Technology, "National Institute of Standards and Technology," Oct. 2023. [Online]. Available: https://www.nist.gov/
Information Commissioner's Office, "ICO fines TikTok £12.7 million for misusing children's data," May 2023. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/
——, "Audits and overview reports," Jan. 2022. [Online]. Available: https://ico.org.uk/action-weve-taken/audits-and-overview-reports/
National Institute of Standards and Technology AIRC Team, "NIST Trustworthy & Responsible AI Resource Center." [Online]. Available: https://airc.nist.gov/home
National Institute of Standards and Technology, "Artificial Intelligence Risk Management Framework (AI RMF 1.0)," 2023.
Information Commissioner's Office, "Annex A: Fairness in the AI Lifecycle," Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/
P. Grother, M. Ngan, and K. Hanaoka, "Face Recognition Vendor Test Part 3: Demographic Effects," National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf
P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, "Face recognition vendor test 2002," in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443). IEEE, 2003, p. 44.
M. Ngan, P. Grother, and K. Hanaoka, "Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms," National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf
Information Commissioner's Office, "ICO issues provisional view to fine Clearview AI Inc over £17 million," Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/
M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, "Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising," Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209
I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, "Saving face: Investigating the ethical concerns of facial recognition auditing," in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151.
S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, "The (im)possibility of fairness: Different value systems require different mechanisms for fair decision making," Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021.
A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, "Fairness and Abstraction in Sociotechnical Systems," in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* '19. New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598
A. Xiang, "Reconciling Legal and Technical Approaches to Algorithmic Bias," Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669
A. Xiang and I. D. Raji, "On the Legal Compatibility of Fairness Definitions," Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761
B. Green, "Escaping the impossibility of fairness: From formal to substantive algorithmic fairness," Philosophy & Technology, vol. 35, no. 4, p. 90, 2022.
B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, "Strategic Evaluation," in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO '23. New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237
Y. D. Clarke, "Algorithmic Accountability Act of 2022," Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf
A. Lenhart, "Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools," Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf
B. Perrigo, "California Bill Proposes Regulating AI at State Level," Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/
P. Mendelson, "Stop Discrimination by Algorithms Act of 2021," Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558
Accountable Tech, AI Now Institute, and EPIC, "Zero Trust AI Governance," 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance
M. Stein and C. Dunlop, "Safe before sale," Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/
T. Phan, J. Goldenfein, M. Mann, and D. Kuch, "Economies of virtue: the circulation of 'ethics' in big tech," Science as Culture, vol. 31, no. 1, pp. 121–135, 2022.
W. Boag, H. Suresh, B. Lepe, and C. D'Ignazio, "Tech worker organizing for power and accountability," in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463.
D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, "It's about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?" in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479.
M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, "An AI ethics 'David and Goliath': value conflicts between large tech companies and their employees," AI & Society, pp. 1–16, 2022.
H. Schellmann, "Auditors are testing hiring algorithms for bias, but there's no easy fix," MIT Technology Review, Feb. 2021.
L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, "Sociotechnical Safety Evaluation of Generative AI Systems," Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986
A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, "The values encoded in machine learning research," in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184.
Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. 
Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. 
Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. 
Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. 
[Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. 
Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. 
Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. 
[2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. 
Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 
11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. 
Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 
11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. 
Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. 
[Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. 
Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. 
[Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. 
Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. 
Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. 
Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. 
D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   
New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. 
Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. 
Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. 
Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. 
Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. 
Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ Grother et al. [2019] P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. 
Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf Phillips et al. [2003] P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. 
Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. 
Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. 
Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. 
Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. 
Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. 
Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. 
Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. 
Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. 
Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. 
Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. 
Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. 
Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. 
Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. 
Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/
[Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. 
Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. 
[Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. 
Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. 
Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. 
Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. 
Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. 
Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. 
Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. 
Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. 
Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/
Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. 
Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. 
[Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. 
Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. 
Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. 
Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. 
[2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. 
Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. 
Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. 
Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. 
Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. 
Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006
——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013
AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/
J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/
A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf
B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/
P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558
Accountable Tech and AI Now Institute, “Zero Trust AI Governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance
M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/
T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as Culture, vol. 31, no. 1, pp. 121–135, 2022.
W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463.
D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479.
M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022.
H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, Feb. 11, 2021.
L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986
A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184.
National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt
——, “Face Recognition Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000
——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002
——, “Artificial Intelligence Risk Management Framework (AI RMF 1.0),” 2023.
Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework: Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/
P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, NIST Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf
P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443). IEEE, 2003, p. 44.
M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf
Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/
M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209
I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151.
S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im)possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021.
A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19. New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598
A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669
A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761
B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022.
B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23. New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237
Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf
L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. 
[Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443).   IEEE, 2003, p. 44. Ngan et al. [2020] M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. 
Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. 
[Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf Information Commissioner’s Office [2021] Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. 
Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/ Lam et al. [2023] M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. 
Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209 Raji et al. [2020b] I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. 
Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151. Friedler et al. [2021] S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. 
Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. 
Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. 
Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. 
Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. 
Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/
  151. Information Commissioner’s Office, “Annex A: Fairness in the AI Lifecycle,” Guidance on the AI Auditing Framework Draft Guidance for Consultation, Mar. 2023. [Online]. Available: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/
  152. P. Grother, M. Ngan, and K. Hanaoka, “Face Recognition Vendor Test Part 3: Demographic Effects,” National Institute of Standards and Technology, Gaithersburg, MD, National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8280, Dec. 2019. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf
  153. P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443). IEEE, 2003, p. 44.
  154. M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf
  155. Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/
  156. M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209
  157. I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151.
  158. S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im)possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021.
  159. A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19. New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598
  160. A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669
  161. A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761
  162. B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022.
  163. B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23. New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237
  164. Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf
  165. A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf
  166. B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/
  167. P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558
  168. Accountable Tech, AI Now Institute, and EPIC, “Zero Trust AI Governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance
  169. M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/
  170. T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as Culture, vol. 31, no. 1, pp. 121–135, 2022.
  171. W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463.
  172. D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479.
  173. M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022.
  174. H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, Feb. 11, 2021.
  175. L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986
  176. A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184.
  177. National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt
  178. ——, “Face Recognition Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000
  179. ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002
  180. ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006
  181. ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013
  182. AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/
  183. L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/
  184. J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/
[Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. 
Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. 
Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. 
[Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. 
Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. 
Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. 
Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. 
Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. 
Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. 
Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. 
Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. 
Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. 
[Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. 
Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. 
Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. 
Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. 
Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. 
Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. 
Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. 
Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. 
Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. 
Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. 
Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. 
D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   
New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. 
Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. 
Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. 
Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. 
Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184.
153. P. J. Phillips, P. Grother, R. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, “Face recognition vendor test 2002,” in 2003 IEEE International SOI Conference. Proceedings (Cat. No. 03CH37443). IEEE, 2003, p. 44.
154. M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf
155. Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/
156. M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209
157. I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151.
158. S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im)possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021.
159. A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19. New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598
160. A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669
161. A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761
162. B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022.
163. B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23. New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237
164. Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf
165. A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf
166. B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/
167. P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558
168. Accountable Tech and AI Now Institute, “Zero Trust AI Governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance
169. M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/
170. T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as Culture, vol. 31, no. 1, pp. 121–135, 2022.
171. W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463.
172. D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479.
173. M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022.
174. H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, Feb. 11, 2021.
175. L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986
176. A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184.
177. National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt
178. National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000
179. National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002
180. National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006
181. National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013
182. AWO, “Announcing our algorithm governance services,” May 2023. [Online].
Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im) possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021. Selbst et al. [2019] A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. 
Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19.   New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. 
Available: https://dl.acm.org/doi/10.1145/3287560.3287598 Xiang [2020] A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. 
[Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. 
Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. 
Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 
11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. 
Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. 
[Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. 
Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. 
[2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 
2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. 
Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006
  154. M. Ngan, P. Grother, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT) Part 6B: Face recognition accuracy with face masks using post-COVID-19 algorithms,” National Institute of Standards and Technology, Tech. Rep., Nov. 2020. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8331.pdf
  155. Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/
  156. M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209
  157. I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151.
  158. S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im)possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021.
  159. A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19. New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598
  160. A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669
  161. A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761
  162. B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022.
  163. B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23. New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237
  164. Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf
  165. A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf
  166. B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/
  167. P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558
  168. Accountable Tech and AI Now Institute, “Zero Trust AI Governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance
  169. M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/
  170. T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as Culture, vol. 31, no. 1, pp. 121–135, 2022.
  171. W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463.
  172. D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479.
  173. M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022.
  174. H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, Feb. 11, 2021.
  175. L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986
  176. A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184.
  177. National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt
  178. National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000
  179. National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002
  180. National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006
  181. National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013
  182. AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/
  183. L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/
  184. J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/
Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. 
D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. 
Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. 
Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. 
Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. 
Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. 
Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. 
Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. 
Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. 
Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. 
Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. 
[Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. 
Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. 
Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. 
Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. 
Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. 
Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. 
Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/
155. Information Commissioner’s Office, “ICO issues provisional view to fine Clearview AI Inc over £17 million,” Dec. 2021. [Online]. Available: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/
156. M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209
157. I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151.
158. S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im)possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021.
159. A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19. New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598
160. A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669
161. A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761
162. B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022.
163. B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23. New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237
164. Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf
165. A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf
166. B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/
167. P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558
168. Accountable Tech, AI Now Institute, and EPIC, “Zero Trust AI Governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance
169. M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/
170. T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as Culture, vol. 31, no. 1, pp. 121–135, 2022.
171. W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463.
172. D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479.
173. M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022.
174. H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, Feb. 11, 2021.
175. L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986
176. A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184.
177. National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt
178. National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000
179. National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002
180. National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006
181. National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013
182. AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/
183. L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/
184. J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. 
Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. 
Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. 
Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. 
Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. 
Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. 
Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. 
Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. 
Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. 
[Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. 
Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. 
Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. 
Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. 
Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. 
Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. 
Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/
  156. M. S. Lam, A. Pandit, C. H. Kalicki, R. Gupta, P. Sahoo, and D. Metaxa, “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising,” Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. CSCW2, pp. 360:1–360:37, Oct. 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3610209
  157. I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151.
  158. S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im)possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021.
  159. A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19. New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598
  160. A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669
  161. A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761
  162. B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022.
  163. B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23. New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237
  164. Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf
  165. A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf
  166. B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/
  167. P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558
  168. Accountable Tech and AI Now Institute, “Zero Trust AI Governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance
  169. M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/
  170. T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as Culture, vol. 31, no. 1, pp. 121–135, 2022.
  171. W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463.
  172. D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479.
  173. M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022.
  174. H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, Feb. 11, 2021.
  175. L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986
  176. A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184.
  177. National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt
  178. ——, “Face Recognition Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000
  179. ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002
  180. ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006
  181. ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013
  182. AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/
  183. L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/
  184. J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/
Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. 
Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. 
Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. 
Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. 
Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. 
[Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. 
Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. 
Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. 
Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. 
Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. 
Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. 
Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/
  157. I. D. Raji, T. Gebru, M. Mitchell, J. Buolamwini, J. Lee, and E. Denton, “Saving face: Investigating the ethical concerns of facial recognition auditing,” in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020, pp. 145–151.
  158. S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian, “The (im)possibility of fairness: Different value systems require different mechanisms for fair decision making,” Communications of the ACM, vol. 64, no. 4, pp. 136–143, 2021.
  159. A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19. New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598
  160. A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669
  161. A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761
  162. B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022.
  163. B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23. New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237
  164. Y. D. Clarke, “Algorithmic Accountability Act of 2022,” 2022. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf
  165. A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI Tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf
  166. B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/
  167. P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558
  168. Accountable Tech, AI Now Institute, and EPIC, “Zero Trust AI Governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance
  169. M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/
  170. T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: The circulation of ‘ethics’ in big tech,” Science as Culture, vol. 31, no. 1, pp. 121–135, 2022.
  171. W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463.
  172. D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479.
  173. M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: Value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022.
  174. H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, Feb. 11, 2021.
  175. L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986
  176. A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184.
  177. National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt
  178. National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000
  179. National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002
  180. National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006
  181. National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013
  182. AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/
  183. L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/
  184. J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669 Xiang and Raji [2019] A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. 
Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761 Green [2022] B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. 
[Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022. Laufer et al. [2023] B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. 
Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. 
Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. 
Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. 
Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. 
Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. 
Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. 
Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. 
Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. 
Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. 
[Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 
2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. 
Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. 
[Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/
159. A. D. Selbst, D. Boyd, S. A. Friedler, S. Venkatasubramanian, and J. Vertesi, “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency, ser. FAT* ’19. New York, NY, USA: Association for Computing Machinery, Jan. 2019, pp. 59–68. [Online]. Available: https://dl.acm.org/doi/10.1145/3287560.3287598
160. A. Xiang, “Reconciling Legal and Technical Approaches to Algorithmic Bias,” Tennessee Law Review, vol. 88, no. 3, pp. 649–724, 2020. [Online]. Available: https://heinonline.org/HOL/P?h=hein.journals/tenn88&i=669
161. A. Xiang and I. D. Raji, “On the Legal Compatibility of Fairness Definitions,” Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761
162. B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022.
Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. 
[Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23.   New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237 Clarke [2019] Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. 
Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf Lenhart [2023] A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. 
Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. 
Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf Perrigo [2023] B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/ Mendelson [2021] P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. 
Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558 Accountable Tech [2023] E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. 
Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ E. Accountable Tech, AI Now Institute, “Zero trust ai governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance Stein and Dunlop [2023] M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. 
Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/ Phan et al. [2022] T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. 
Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as culture, vol. 31, no. 1, pp. 121–135, 2022. Boag et al. [2022] W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. 
Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463. Widder et al. [2023] D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 
2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. 
Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. 
Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. 
Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. 
Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/
161. A. Xiang and I. D. Raji, "On the Legal Compatibility of Fairness Definitions," Nov. 2019. [Online]. Available: http://arxiv.org/abs/1912.00761
162. B. Green, "Escaping the impossibility of fairness: From formal to substantive algorithmic fairness," Philosophy & Technology, vol. 35, no. 4, p. 90, 2022.
163. B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, "Strategic Evaluation," in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO '23. New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237
164. Y. D. Clarke, "Algorithmic Accountability Act of 2022," Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf
165. A. Lenhart, "Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI Tools," Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf
166. B. Perrigo, "California Bill Proposes Regulating AI at State Level," Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/
167. P. Mendelson, "Stop Discrimination by Algorithms Act of 2021," Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558
168. Accountable Tech, AI Now Institute, and EPIC, "Zero Trust AI Governance," 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance
169. M. Stein and C. Dunlop, "Safe before sale," Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/
170. T. Phan, J. Goldenfein, M. Mann, and D. Kuch, "Economies of virtue: the circulation of 'ethics' in big tech," Science as Culture, vol. 31, no. 1, pp. 121–135, 2022.
171. W. Boag, H. Suresh, B. Lepe, and C. D'Ignazio, "Tech worker organizing for power and accountability," in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463.
172. D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, "It's about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?" in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479.
173. M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, "An AI ethics 'David and Goliath': value conflicts between large tech companies and their employees," AI & Society, pp. 1–16, 2022.
174. H. Schellmann, "Auditors are testing hiring algorithms for bias, but there's no easy fix," MIT Technology Review, Feb. 11, 2021.
175. L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, "Sociotechnical Safety Evaluation of Generative AI Systems," Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986
176. A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, "The values encoded in machine learning research," in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184.
177. National Institute of Standards and Technology, "Face Recognition Vendor Test (FRVT)," Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt
178. National Institute of Standards and Technology, "Face Recognition Vendor Test (FRVT) 2000," 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000
179. National Institute of Standards and Technology, "Face Recognition Vendor Test (FRVT) 2002," 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002
180. National Institute of Standards and Technology, "Face Recognition Vendor Test (FRVT) 2006," 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006
181. National Institute of Standards and Technology, "Face Recognition Vendor Test (FRVT) 2013," 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013
182. AWO, "Announcing our algorithm governance services," May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/
183. L. Groves, "Algorithmic impact assessment: A case study in healthcare," Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/
184. J. Brennan and A. Circiumaru, "Getting under the hood of big tech," Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. 
Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 
173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. 
Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. 
[Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/
162. B. Green, “Escaping the impossibility of fairness: From formal to substantive algorithmic fairness,” Philosophy & Technology, vol. 35, no. 4, p. 90, 2022.
163. B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23. New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237
164. Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf
165. A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf
166. B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/
167. P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558
168. Accountable Tech, AI Now Institute, and EPIC, “Zero trust AI governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance
169. M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/
170. T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as Culture, vol. 31, no. 1, pp. 121–135, 2022.
171. W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463.
172. D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479.
173. M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022.
174. H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, Feb. 11, 2021.
175. L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986
176. A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184.
177. National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt
178. National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000
179. National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002
180. National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006
181. National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013
182. AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/
183. L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/
184. J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/
M. Ryan, E. Christodoulou, J. Antoniou, and K.
Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. 
Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. 
Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. 
Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. 
Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/
163. B. Laufer, J. Kleinberg, K. Levy, and H. Nissenbaum, “Strategic Evaluation,” in Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, ser. EAAMO ’23. New York, NY, USA: Association for Computing Machinery, Oct. 2023, pp. 1–12. [Online]. Available: https://dl.acm.org/doi/10.1145/3617694.3623237
164. Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Feb. 2022. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf
165. A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI Tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf
166. B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/
167. P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558
168. Accountable Tech, AI Now Institute, and EPIC, “Zero Trust AI Governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance
169. M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/
170. T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as Culture, vol. 31, no. 1, pp. 121–135, 2022.
171. W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463.
172. D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479.
173. M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022.
174. H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, Feb. 11, 2021.
175. L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986
176. A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184.
177. National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt
178. ——, “Face Recognition Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000
179. ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002
180. ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006
181. ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013
182. AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/
183. L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/
184. J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 
2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. 
Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. 
[Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/
164. Y. D. Clarke, “Algorithmic Accountability Act of 2022,” Apr. 2019. [Online]. Available: https://www.congress.gov/117/bills/hr6580/BILLS-117hr6580ih.pdf
165. A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf
166. B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/
167. P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558
168. Accountable Tech, AI Now Institute, and EPIC, “Zero Trust AI Governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance
169. M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/
170. T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as Culture, vol. 31, no. 1, pp. 121–135, 2022.
171. W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463.
172. D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479.
173. M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022.
174. H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, Feb. 11, 2021.
175. L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986
176. A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184.
177. National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt
178. ——, “Face Recognition Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000
179. ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002
180. ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006
181. ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013
182. AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/
183. L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/
184. J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/
Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. 
Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/
165. A. Lenhart, “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI Tools,” Institute for Data, Democracy & Politics, George Washington University, Tech. Rep., Jun. 2023. [Online]. Available: https://iddp.gwu.edu/sites/g/files/zaxdzs5791/files/2023-06/federal_ai_legislation_v3.pdf
166. B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/
167. P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558
168. Accountable Tech and AI Now Institute, “Zero trust AI governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance
169. M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/
170. T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as Culture, vol. 31, no. 1, pp. 121–135, 2022.
171. W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463.
172. D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479.
173. M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022.
174. H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, Feb. 11, 2021.
175. L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986
176. A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184.
177. National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt
178. ——, “Face Recognition Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000
179. ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002
180. ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006
181. ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013
182. AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/
183. L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/
184. J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/
Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/
  166. B. Perrigo, “California Bill Proposes Regulating AI at State Level,” Time, Sep. 2023. [Online]. Available: https://time.com/6313588/california-ai-regulation-bill/
  167. P. Mendelson, “Stop Discrimination by Algorithms Act of 2021,” Dec. 2021. [Online]. Available: https://lims.dccouncil.gov/Legislation/B24-0558
Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479. Ryan et al. [2022] M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022. Schellmann [2021] H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. 
Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, February, vol. 11, p. 2021, 2021. Weidinger et al. [2023] L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. 
Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986 Birhane et al. [2022] A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. 
Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184. National institute of Standards and Technology [2020] National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ National institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt National institute of Standards and Technology [2000] ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. 
Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recogntion Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000 National institute of Standards and Technology [2002] ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002 National institute of Standards and Technology [2006] ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006 National institute of Standards and Technology [2013] ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. 
Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/
  168. Accountable Tech, AI Now Institute, and EPIC, “Zero trust AI governance,” 2023. [Online]. Available: https://ainowinstitute.org/publication/zero-trust-ai-governance
  169. M. Stein and C. Dunlop, “Safe before sale,” Ada Lovelace Institute, Tech. Rep., Dec. 2023. [Online]. Available: https://www.adalovelaceinstitute.org/report/safe-before-sale/
  170. T. Phan, J. Goldenfein, M. Mann, and D. Kuch, “Economies of virtue: the circulation of ‘ethics’ in big tech,” Science as Culture, vol. 31, no. 1, pp. 121–135, 2022.
  171. W. Boag, H. Suresh, B. Lepe, and C. D’Ignazio, “Tech worker organizing for power and accountability,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 452–463.
  172. D. G. Widder, D. Zhen, L. Dabbish, and J. Herbsleb, “It’s about power: What ethical concerns do software engineers have, and what do they (feel they can) do about them?” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 467–479.
  173. M. Ryan, E. Christodoulou, J. Antoniou, and K. Iordanou, “An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees,” AI & Society, pp. 1–16, 2022.
  174. H. Schellmann, “Auditors are testing hiring algorithms for bias, but there’s no easy fix,” MIT Technology Review, Feb. 11, 2021.
  175. L. Weidinger, M. Rauh, N. Marchal, A. Manzini, L. A. Hendricks, J. Mateos-Garcia, S. Bergman, J. Kay, C. Griffin, B. Bariach, I. Gabriel, V. Rieser, and W. Isaac, “Sociotechnical Safety Evaluation of Generative AI Systems,” Oct. 2023. [Online]. Available: http://arxiv.org/abs/2310.11986
  176. A. Birhane, P. Kalluri, D. Card, W. Agnew, R. Dotan, and M. Bao, “The values encoded in machine learning research,” in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2022, pp. 173–184.
  177. National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT),” Nov. 2020. [Online]. Available: https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt
  178. National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT) 2000,” 2000. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recogntion-vendor-test-frvt-2000
  179. National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT) 2002,” 2002. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2002
  180. National Institute of Standards and Technology, “Face Recognition Vendor Test (FRVT) 2006,” 2006. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2006
  181. ——, “Face Recognition Vendor Test (FRVT) 2013,” 2013. [Online]. Available: https://www.nist.gov/itl/iad/image-group/face-recognition-vendor-test-frvt-2013 AWO [2023c] AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/
  182. AWO, “Announcing our algorithm governance services,” May 2023. [Online]. Available: https://www.awo.agency/blog/announcing-our-algorithm-governance-services/ Groves [2022] L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/
  183. L. Groves, “Algorithmic impact assessment: A case study in healthcare,” Ada Lovelace Institute, Tech. Rep., Feb. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ Brennan and Circiumaru [2022] J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/ J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/
  184. J. Brennan and A. Circiumaru, “Getting under the hood of big tech,” Ada Lovelace Institute, Tech. Rep., Mar. 2022. [Online]. Available: https://www.adalovelaceinstitute.org/blog/getting-under-the-hood-of-big-tech/
Authors (5)
  1. Abeba Birhane (24 papers)
  2. Ryan Steed (6 papers)
  3. Victor Ojewale (3 papers)
  4. Briana Vecchione (7 papers)
  5. Inioluwa Deborah Raji (25 papers)
Citations (27)
